Test Report: KVM_Linux_containerd 20242

454e3a8af9229d80194750b761a4b9142724e045:2025-01-20:37993

Tests failed: 1/324

Order  Failed test                                           Duration (s)
360    TestStartStop/group/embed-certs/serial/SecondStart    1622.74
TestStartStop/group/embed-certs/serial/SecondStart (1622.74s)
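The start command was killed after 27 minutes (signal: killed), which points to a harness-imposed timeout rather than an error reported by minikube itself. To reproduce, the failing invocation (copied from the log below) can be re-run against a locally built binary; the go test line is only a sketch assuming the standard minikube integration-test layout, and the suite's extra flags (driver, container runtime) may need to match this job's configuration:

	out/minikube-linux-amd64 start -p embed-certs-553677 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.0

	go test ./test/integration -run 'TestStartStop/group/embed-certs/serial/SecondStart' -timeout 60m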

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-553677 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 14:00:16.717686 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.368324 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.374833 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.386364 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.407921 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.449448 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.531007 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.692542 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:23.014386 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:23.656528 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:24.938484 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:27.401984 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:27.500470 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-553677 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: signal: killed (27m0.412647973s)

-- stdout --
	* [embed-certs-553677] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-553677" primary control-plane node in "embed-certs-553677" cluster
	* Restarting existing kvm2 VM for "embed-certs-553677" ...
	* Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-553677 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

-- /stdout --
** stderr ** 
	I0120 14:00:09.136331 1060798 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:00:09.136455 1060798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:00:09.136464 1060798 out.go:358] Setting ErrFile to fd 2...
	I0120 14:00:09.136469 1060798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:00:09.136684 1060798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 14:00:09.137260 1060798 out.go:352] Setting JSON to false
	I0120 14:00:09.138235 1060798 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13351,"bootTime":1737368258,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:00:09.138350 1060798 start.go:139] virtualization: kvm guest
	I0120 14:00:09.140578 1060798 out.go:177] * [embed-certs-553677] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:00:09.142079 1060798 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:00:09.142074 1060798 notify.go:220] Checking for updates...
	I0120 14:00:09.143562 1060798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:00:09.144993 1060798 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 14:00:09.146404 1060798 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	I0120 14:00:09.147692 1060798 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:00:09.148998 1060798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:00:09.150691 1060798 config.go:182] Loaded profile config "embed-certs-553677": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:00:09.151141 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:00:09.151189 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:00:09.166623 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0120 14:00:09.167122 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:00:09.167718 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:00:09.167742 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:00:09.168137 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:00:09.168428 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:00:09.168757 1060798 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:00:09.169233 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:00:09.169310 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:00:09.184559 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0120 14:00:09.185140 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:00:09.185701 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:00:09.185731 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:00:09.186076 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:00:09.186290 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:00:09.227893 1060798 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 14:00:09.229385 1060798 start.go:297] selected driver: kvm2
	I0120 14:00:09.229408 1060798 start.go:901] validating driver "kvm2" against &{Name:embed-certs-553677 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:00:09.229531 1060798 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:00:09.230237 1060798 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:00:09.230337 1060798 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-998973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:00:09.247147 1060798 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:00:09.247587 1060798 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:00:09.247634 1060798 cni.go:84] Creating CNI manager for ""
	I0120 14:00:09.247685 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 14:00:09.247721 1060798 start.go:340] cluster config:
	{Name:embed-certs-553677 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:00:09.247834 1060798 iso.go:125] acquiring lock: {Name:mk63965bcac7e5d2166c667dd03e4270f636bd53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:00:09.249884 1060798 out.go:177] * Starting "embed-certs-553677" primary control-plane node in "embed-certs-553677" cluster
	I0120 14:00:09.251254 1060798 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 14:00:09.251313 1060798 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
	I0120 14:00:09.251326 1060798 cache.go:56] Caching tarball of preloaded images
	I0120 14:00:09.251426 1060798 preload.go:172] Found /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0120 14:00:09.251437 1060798 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
	I0120 14:00:09.251541 1060798 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/config.json ...
	I0120 14:00:09.251743 1060798 start.go:360] acquireMachinesLock for embed-certs-553677: {Name:mk36ae0f7b2d42a8734a6403f72836860fc4ccfa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:00:25.130435 1060798 start.go:364] duration metric: took 15.878602581s to acquireMachinesLock for "embed-certs-553677"
	I0120 14:00:25.130512 1060798 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:00:25.130525 1060798 fix.go:54] fixHost starting: 
	I0120 14:00:25.130961 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:00:25.131024 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:00:25.151812 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I0120 14:00:25.152266 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:00:25.152822 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:00:25.152854 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:00:25.153234 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:00:25.153468 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:00:25.153642 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:00:25.155461 1060798 fix.go:112] recreateIfNeeded on embed-certs-553677: state=Stopped err=<nil>
	I0120 14:00:25.155490 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	W0120 14:00:25.155656 1060798 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:00:25.158121 1060798 out.go:177] * Restarting existing kvm2 VM for "embed-certs-553677" ...
	I0120 14:00:25.159720 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Start
	I0120 14:00:25.159943 1060798 main.go:141] libmachine: (embed-certs-553677) starting domain...
	I0120 14:00:25.159967 1060798 main.go:141] libmachine: (embed-certs-553677) ensuring networks are active...
	I0120 14:00:25.160789 1060798 main.go:141] libmachine: (embed-certs-553677) Ensuring network default is active
	I0120 14:00:25.161303 1060798 main.go:141] libmachine: (embed-certs-553677) Ensuring network mk-embed-certs-553677 is active
	I0120 14:00:25.161800 1060798 main.go:141] libmachine: (embed-certs-553677) getting domain XML...
	I0120 14:00:25.162593 1060798 main.go:141] libmachine: (embed-certs-553677) creating domain...
	I0120 14:00:26.523284 1060798 main.go:141] libmachine: (embed-certs-553677) waiting for IP...
	I0120 14:00:26.524408 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:26.524955 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:26.525074 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:26.524944 1060911 retry.go:31] will retry after 222.778825ms: waiting for domain to come up
	I0120 14:00:26.749767 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:26.750528 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:26.750560 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:26.750483 1060911 retry.go:31] will retry after 239.249302ms: waiting for domain to come up
	I0120 14:00:26.991082 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:26.991790 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:26.991837 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:26.991741 1060911 retry.go:31] will retry after 416.399646ms: waiting for domain to come up
	I0120 14:00:27.844878 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:27.845488 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:27.845517 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:27.845470 1060911 retry.go:31] will retry after 470.570909ms: waiting for domain to come up
	I0120 14:00:28.318025 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:28.318569 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:28.318616 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:28.318550 1060911 retry.go:31] will retry after 725.900803ms: waiting for domain to come up
	I0120 14:00:29.046621 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:29.047238 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:29.047263 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:29.047192 1060911 retry.go:31] will retry after 590.863404ms: waiting for domain to come up
	I0120 14:00:29.639513 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:29.640030 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:29.640060 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:29.639986 1060911 retry.go:31] will retry after 779.536692ms: waiting for domain to come up
	I0120 14:00:30.421805 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:30.422403 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:30.422464 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:30.422385 1060911 retry.go:31] will retry after 1.137826076s: waiting for domain to come up
	I0120 14:00:31.561820 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:31.562422 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:31.562449 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:31.562392 1060911 retry.go:31] will retry after 1.724582419s: waiting for domain to come up
	I0120 14:00:33.289526 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:33.290221 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:33.290253 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:33.290164 1060911 retry.go:31] will retry after 1.979389937s: waiting for domain to come up
	I0120 14:00:35.271040 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:35.271737 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:35.271771 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:35.271698 1060911 retry.go:31] will retry after 2.702719811s: waiting for domain to come up
	I0120 14:00:37.975637 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:37.976177 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:37.976205 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:37.976144 1060911 retry.go:31] will retry after 2.907988017s: waiting for domain to come up
	I0120 14:00:40.886071 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:40.886547 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
	I0120 14:00:40.886579 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:40.886505 1060911 retry.go:31] will retry after 3.55226413s: waiting for domain to come up
	I0120 14:00:44.788861 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:44.789567 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has current primary IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:44.789606 1060798 main.go:141] libmachine: (embed-certs-553677) found domain IP: 192.168.72.136
	I0120 14:00:44.789620 1060798 main.go:141] libmachine: (embed-certs-553677) reserving static IP address...
	I0120 14:00:44.790314 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "embed-certs-553677", mac: "52:54:00:7d:7a:fd", ip: "192.168.72.136"} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:44.790340 1060798 main.go:141] libmachine: (embed-certs-553677) reserved static IP address 192.168.72.136 for domain embed-certs-553677
	I0120 14:00:44.790367 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | skip adding static IP to network mk-embed-certs-553677 - found existing host DHCP lease matching {name: "embed-certs-553677", mac: "52:54:00:7d:7a:fd", ip: "192.168.72.136"}
	I0120 14:00:44.790394 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Getting to WaitForSSH function...
	I0120 14:00:44.790407 1060798 main.go:141] libmachine: (embed-certs-553677) waiting for SSH...
	I0120 14:00:44.794659 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:44.795095 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:44.795127 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:44.795243 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Using SSH client type: external
	I0120 14:00:44.795270 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa (-rw-------)
	I0120 14:00:44.795309 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:00:44.795325 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | About to run SSH command:
	I0120 14:00:44.795362 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | exit 0
	I0120 14:00:44.930778 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | SSH cmd err, output: <nil>: 
	I0120 14:00:44.931282 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetConfigRaw
	I0120 14:00:44.932172 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetIP
	I0120 14:00:44.935918 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:44.936516 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:44.936563 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:44.936656 1060798 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/config.json ...
	I0120 14:00:44.936929 1060798 machine.go:93] provisionDockerMachine start ...
	I0120 14:00:44.936952 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:00:44.937262 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:00:44.939866 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:44.940268 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:44.940317 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:44.940438 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:00:44.940624 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:44.940796 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:44.940995 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:00:44.941199 1060798 main.go:141] libmachine: Using SSH client type: native
	I0120 14:00:44.941385 1060798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0120 14:00:44.941397 1060798 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:00:45.062836 1060798 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:00:45.062872 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetMachineName
	I0120 14:00:45.063173 1060798 buildroot.go:166] provisioning hostname "embed-certs-553677"
	I0120 14:00:45.063204 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetMachineName
	I0120 14:00:45.063428 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:00:45.066583 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.067062 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:45.067089 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.067223 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:00:45.067440 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:45.067602 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:45.067748 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:00:45.067976 1060798 main.go:141] libmachine: Using SSH client type: native
	I0120 14:00:45.068183 1060798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0120 14:00:45.068200 1060798 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-553677 && echo "embed-certs-553677" | sudo tee /etc/hostname
	I0120 14:00:45.199195 1060798 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-553677
	
	I0120 14:00:45.199230 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:00:45.202583 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.203009 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:45.203041 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.203214 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:00:45.203458 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:45.203698 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:45.203867 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:00:45.204107 1060798 main.go:141] libmachine: Using SSH client type: native
	I0120 14:00:45.204400 1060798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0120 14:00:45.204433 1060798 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-553677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-553677/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-553677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:00:45.326926 1060798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:00:45.326969 1060798 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-998973/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-998973/.minikube}
	I0120 14:00:45.326996 1060798 buildroot.go:174] setting up certificates
	I0120 14:00:45.327009 1060798 provision.go:84] configureAuth start
	I0120 14:00:45.327023 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetMachineName
	I0120 14:00:45.327381 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetIP
	I0120 14:00:45.330599 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.331028 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:45.331067 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.331303 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:00:45.333924 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.334385 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:45.334439 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.334549 1060798 provision.go:143] copyHostCerts
	I0120 14:00:45.334623 1060798 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem, removing ...
	I0120 14:00:45.334647 1060798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem
	I0120 14:00:45.334718 1060798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem (1082 bytes)
	I0120 14:00:45.334848 1060798 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem, removing ...
	I0120 14:00:45.334865 1060798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem
	I0120 14:00:45.334896 1060798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem (1123 bytes)
	I0120 14:00:45.334980 1060798 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem, removing ...
	I0120 14:00:45.334991 1060798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem
	I0120 14:00:45.335017 1060798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem (1675 bytes)
	I0120 14:00:45.335085 1060798 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem org=jenkins.embed-certs-553677 san=[127.0.0.1 192.168.72.136 embed-certs-553677 localhost minikube]
	I0120 14:00:45.559381 1060798 provision.go:177] copyRemoteCerts
	I0120 14:00:45.559445 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:00:45.559475 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:00:45.562152 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.562469 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:45.562506 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.562677 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:00:45.562897 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:45.563020 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:00:45.563240 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:00:45.652039 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:00:45.680567 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0120 14:00:45.708749 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 14:00:45.736457 1060798 provision.go:87] duration metric: took 409.40887ms to configureAuth
	I0120 14:00:45.736502 1060798 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:00:45.736743 1060798 config.go:182] Loaded profile config "embed-certs-553677": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:00:45.736759 1060798 machine.go:96] duration metric: took 799.816175ms to provisionDockerMachine
	I0120 14:00:45.736767 1060798 start.go:293] postStartSetup for "embed-certs-553677" (driver="kvm2")
	I0120 14:00:45.736781 1060798 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:00:45.736824 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:00:45.737243 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:00:45.737276 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:00:45.740300 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.740827 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:45.740864 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.741093 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:00:45.741357 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:45.741522 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:00:45.741710 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:00:45.828630 1060798 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:00:45.833818 1060798 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:00:45.833872 1060798 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-998973/.minikube/addons for local assets ...
	I0120 14:00:45.833963 1060798 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-998973/.minikube/files for local assets ...
	I0120 14:00:45.834099 1060798 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem -> 10062632.pem in /etc/ssl/certs
	I0120 14:00:45.834268 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:00:45.845164 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem --> /etc/ssl/certs/10062632.pem (1708 bytes)
	I0120 14:00:45.876226 1060798 start.go:296] duration metric: took 139.437685ms for postStartSetup
	I0120 14:00:45.876281 1060798 fix.go:56] duration metric: took 20.745757423s for fixHost
	I0120 14:00:45.876315 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:00:45.879709 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.880097 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:45.880131 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.880347 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:00:45.880589 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:45.880755 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:45.880989 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:00:45.881164 1060798 main.go:141] libmachine: Using SSH client type: native
	I0120 14:00:45.881369 1060798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.136 22 <nil> <nil>}
	I0120 14:00:45.881385 1060798 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:00:45.994287 1060798 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381645.965468301
	
	I0120 14:00:45.994315 1060798 fix.go:216] guest clock: 1737381645.965468301
	I0120 14:00:45.994326 1060798 fix.go:229] Guest: 2025-01-20 14:00:45.965468301 +0000 UTC Remote: 2025-01-20 14:00:45.876285295 +0000 UTC m=+36.780783009 (delta=89.183006ms)
	I0120 14:00:45.994371 1060798 fix.go:200] guest clock delta is within tolerance: 89.183006ms
	I0120 14:00:45.994379 1060798 start.go:83] releasing machines lock for "embed-certs-553677", held for 20.863898065s
	I0120 14:00:45.994409 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:00:45.994700 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetIP
	I0120 14:00:45.997789 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.998225 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:45.998251 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:45.998493 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:00:45.999097 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:00:45.999284 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:00:45.999347 1060798 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:00:45.999411 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:00:45.999587 1060798 ssh_runner.go:195] Run: cat /version.json
	I0120 14:00:45.999630 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:00:46.002787 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:46.003148 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:46.003274 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:46.003302 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:46.003554 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:46.003577 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:46.003622 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:00:46.003778 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:00:46.003873 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:46.003989 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:00:46.004048 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:00:46.004280 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:00:46.004284 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:00:46.004447 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:00:46.087485 1060798 ssh_runner.go:195] Run: systemctl --version
	I0120 14:00:46.115664 1060798 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:00:46.123518 1060798 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:00:46.123609 1060798 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:00:46.147126 1060798 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:00:46.147166 1060798 start.go:495] detecting cgroup driver to use...
	I0120 14:00:46.147253 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 14:00:46.182494 1060798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 14:00:46.200915 1060798 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:00:46.201014 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:00:46.218855 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:00:46.235015 1060798 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:00:46.368546 1060798 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:00:46.535139 1060798 docker.go:233] disabling docker service ...
	I0120 14:00:46.535226 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:00:46.551928 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:00:46.569189 1060798 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:00:46.721501 1060798 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:00:46.870799 1060798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:00:46.888859 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:00:46.922800 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0120 14:00:46.935631 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 14:00:46.947299 1060798 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 14:00:46.947365 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 14:00:46.959181 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 14:00:46.971239 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 14:00:46.982931 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 14:00:46.994688 1060798 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:00:47.006568 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 14:00:47.018188 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0120 14:00:47.029555 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
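The sed commands above rewrite /etc/containerd/config.toml in place: the pause image and OOM-score setting, the runc v2 shim, SystemdCgroup = false (the "cgroupfs" driver chosen earlier), the standard /etc/cni/net.d conf_dir, and unprivileged ports. A rough Go equivalent of just the SystemdCgroup edit, shown only as a sketch of the rewrite-in-place pattern (file path as in the log):

    // set_cgroupfs.go - flip SystemdCgroup to false in containerd's config,
    // as the sed command in the log does.
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Match lines like "        SystemdCgroup = true" and keep the indentation.
    	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		panic(err)
    	}
    }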
	I0120 14:00:47.042008 1060798 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:00:47.053847 1060798 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:00:47.053914 1060798 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:00:47.068557 1060798 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:00:47.079724 1060798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:00:47.244050 1060798 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 14:00:47.286700 1060798 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 14:00:47.286783 1060798 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 14:00:47.293695 1060798 retry.go:31] will retry after 1.046860485s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0120 14:00:48.340998 1060798 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
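The containerd restart races with its socket appearing, so the first stat above fails and is retried roughly a second later. A small sketch of that wait-for-socket loop (the 60s timeout matches the log, the fixed 1s interval is an assumption rather than minikube's exact backoff):

    // wait_socket.go - poll for a unix socket path until it exists or a deadline passes.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForFile(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(time.Second)
    	}
    }

    func main() {
    	if err := waitForFile("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("containerd socket is ready")
    }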
	I0120 14:00:48.348295 1060798 start.go:563] Will wait 60s for crictl version
	I0120 14:00:48.348362 1060798 ssh_runner.go:195] Run: which crictl
	I0120 14:00:48.353005 1060798 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:00:48.401857 1060798 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0120 14:00:48.401945 1060798 ssh_runner.go:195] Run: containerd --version
	I0120 14:00:48.436624 1060798 ssh_runner.go:195] Run: containerd --version
	I0120 14:00:48.469764 1060798 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
	I0120 14:00:48.471367 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetIP
	I0120 14:00:48.474978 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:48.475421 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:00:48.475451 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:00:48.475767 1060798 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 14:00:48.481387 1060798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
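The bash one-liner above rewrites /etc/hosts by dropping any stale host.minikube.internal line and appending the current gateway mapping. The same filter-and-append step, sketched in Go (the 192.168.72.1 address and hostname come from the log):

    // update_hosts.go - replace the host.minikube.internal entry in /etc/hosts.
    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.72.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue // drop any stale mapping
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    }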
	I0120 14:00:48.496680 1060798 kubeadm.go:883] updating cluster {Name:embed-certs-553677 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:00:48.496831 1060798 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 14:00:48.496943 1060798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:00:48.543621 1060798 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 14:00:48.543650 1060798 containerd.go:534] Images already preloaded, skipping extraction
	I0120 14:00:48.543720 1060798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:00:48.583058 1060798 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 14:00:48.583091 1060798 cache_images.go:84] Images are preloaded, skipping loading
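Both "sudo crictl images --output json" calls above confirm that the preloaded images for v1.32.0 are already present in containerd, so no tarball extraction or image loading is needed. A sketch of parsing that output follows; the "images"/"repoTags" field names match what recent crictl versions emit, but treat the struct shape as an assumption rather than a stable API:

    // list_images.go - run crictl and count the images it reports.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImages struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d images present\n", len(imgs.Images))
    	for _, img := range imgs.Images {
    		fmt.Println(img.RepoTags)
    	}
    }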
	I0120 14:00:48.583102 1060798 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.32.0 containerd true true} ...
	I0120 14:00:48.583248 1060798 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-553677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
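The [Service] drop-in shown above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the empty ExecStart= clears the packaged command before the minikube-specific one is set. A sketch of how such a drop-in could be rendered (values copied from the log; not minikube's actual template code):

    // kubelet_dropin.go - render a kubelet systemd drop-in like the one in the log.
    package main

    import "fmt"

    func main() {
    	const tmpl = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

    [Install]
    `
    	kubelet := "/var/lib/minikube/binaries/v1.32.0/kubelet"
    	fmt.Printf(tmpl, kubelet, "embed-certs-553677", "192.168.72.136")
    }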
	I0120 14:00:48.583324 1060798 ssh_runner.go:195] Run: sudo crictl info
	I0120 14:00:48.626717 1060798 cni.go:84] Creating CNI manager for ""
	I0120 14:00:48.626749 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 14:00:48.626763 1060798 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:00:48.626794 1060798 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-553677 NodeName:embed-certs-553677 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 14:00:48.626939 1060798 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-553677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.136"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:00:48.627014 1060798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 14:00:48.638594 1060798 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:00:48.638682 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:00:48.649443 1060798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0120 14:00:48.672682 1060798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:00:48.693688 1060798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2314 bytes)
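That 2314-byte kubeadm.yaml.new is the multi-document YAML shown above: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration separated by "---". One quick sanity check on such a file is to decode every document and print its kind, as in this sketch (assumes gopkg.in/yaml.v3 is available):

    // check_kubeadm_yaml.go - list the kind of every document in a multi-doc YAML file.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			panic(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }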
	I0120 14:00:48.714789 1060798 ssh_runner.go:195] Run: grep 192.168.72.136	control-plane.minikube.internal$ /etc/hosts
	I0120 14:00:48.719444 1060798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:00:48.733671 1060798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:00:48.868720 1060798 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:00:48.892448 1060798 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677 for IP: 192.168.72.136
	I0120 14:00:48.892480 1060798 certs.go:194] generating shared ca certs ...
	I0120 14:00:48.892506 1060798 certs.go:226] acquiring lock for ca certs: {Name:mk3b53704e4ec52de26582ed9269b5c3b0eb7914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:00:48.892707 1060798 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-998973/.minikube/ca.key
	I0120 14:00:48.892774 1060798 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.key
	I0120 14:00:48.892792 1060798 certs.go:256] generating profile certs ...
	I0120 14:00:48.892917 1060798 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/client.key
	I0120 14:00:48.893048 1060798 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/apiserver.key.4b39fe5c
	I0120 14:00:48.893105 1060798 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/proxy-client.key
	I0120 14:00:48.893271 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263.pem (1338 bytes)
	W0120 14:00:48.893313 1060798 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263_empty.pem, impossibly tiny 0 bytes
	I0120 14:00:48.893327 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:00:48.893365 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:00:48.893403 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:00:48.893435 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem (1675 bytes)
	I0120 14:00:48.893489 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem (1708 bytes)
	I0120 14:00:48.894289 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:00:48.942535 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 14:00:48.981045 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:00:49.024866 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:00:49.064664 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 14:00:49.111059 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 14:00:49.154084 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:00:49.196268 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 14:00:49.224461 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263.pem --> /usr/share/ca-certificates/1006263.pem (1338 bytes)
	I0120 14:00:49.257755 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem --> /usr/share/ca-certificates/10062632.pem (1708 bytes)
	I0120 14:00:49.291363 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:00:49.325873 1060798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:00:49.348619 1060798 ssh_runner.go:195] Run: openssl version
	I0120 14:00:49.358463 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10062632.pem && ln -fs /usr/share/ca-certificates/10062632.pem /etc/ssl/certs/10062632.pem"
	I0120 14:00:49.373474 1060798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10062632.pem
	I0120 14:00:49.379380 1060798 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:56 /usr/share/ca-certificates/10062632.pem
	I0120 14:00:49.379466 1060798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10062632.pem
	I0120 14:00:49.386420 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10062632.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:00:49.400887 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:00:49.416345 1060798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:00:49.422379 1060798 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:48 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:00:49.422463 1060798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:00:49.431905 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:00:49.446192 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1006263.pem && ln -fs /usr/share/ca-certificates/1006263.pem /etc/ssl/certs/1006263.pem"
	I0120 14:00:49.464845 1060798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1006263.pem
	I0120 14:00:49.470841 1060798 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:56 /usr/share/ca-certificates/1006263.pem
	I0120 14:00:49.470936 1060798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1006263.pem
	I0120 14:00:49.477897 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1006263.pem /etc/ssl/certs/51391683.0"
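The openssl/ln sequence above installs each CA into /usr/share/ca-certificates and links it from /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0 in this run), which is how OpenSSL locates trust anchors. A sketch of the hash-and-link step, shelling out to openssl just as the log does (cert path taken from the log):

    // link_ca.go - symlink a CA cert under its OpenSSL subject hash in /etc/ssl/certs.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Replace any existing link, mirroring "ln -fs" in the log.
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", cert, "->", link)
    }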
	I0120 14:00:49.493285 1060798 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:00:49.499356 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:00:49.512066 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:00:49.520694 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:00:49.528307 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:00:49.537554 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:00:49.547409 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
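Each "openssl x509 ... -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. The same check done natively, as a sketch (the path is one of those from the log):

    // check_expiry.go - fail if a certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM data found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid until", cert.NotAfter)
    }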
	I0120 14:00:49.554863 1060798 kubeadm.go:392] StartCluster: {Name:embed-certs-553677 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:00:49.554982 1060798 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 14:00:49.555058 1060798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:00:49.601305 1060798 cri.go:89] found id: "ba4e0b7070c1f21b2fb5b10c65d9fa8449c0cfbb3609dfa150cc3e75c24b3a95"
	I0120 14:00:49.601340 1060798 cri.go:89] found id: "8849248bcbccaaa5666cf1b10a904de017968aafdc8dee3b4c1e513e9406d17e"
	I0120 14:00:49.601346 1060798 cri.go:89] found id: "c20ea93f59ac701e4df9672275e0ff67e1b14867b2327aa7b1c73eca3b8d6a88"
	I0120 14:00:49.601352 1060798 cri.go:89] found id: "773b7e54100723de0d144e1f855bb6abccfe49ac3af85db764979610dd8a7768"
	I0120 14:00:49.601356 1060798 cri.go:89] found id: "b124c5bdd444435d1aca8531c3ad4c61dca0e2f7a57508c1ed4cbda0226873c8"
	I0120 14:00:49.601361 1060798 cri.go:89] found id: "67b8ff6e2106fb5e4450bf23978ca6b657d1642f27d072d2973a89e9898387cd"
	I0120 14:00:49.601365 1060798 cri.go:89] found id: "6cd1df2057537d91ffd5cb9fe006440e912e0aa3015e8fdbb286736a7d4741fa"
	I0120 14:00:49.601370 1060798 cri.go:89] found id: "43e4658ac5fadb19e6506e7f595a4ed0c8bea9c2fa098cfca02d0700f6a77d23"
	I0120 14:00:49.601373 1060798 cri.go:89] found id: ""
	I0120 14:00:49.601430 1060798 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0120 14:00:49.618071 1060798 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T14:00:49Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0120 14:00:49.618176 1060798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:00:49.631147 1060798 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:00:49.631234 1060798 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:00:49.631307 1060798 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:00:49.642306 1060798 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:00:49.643040 1060798 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-553677" does not appear in /home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 14:00:49.643304 1060798 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-998973/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-553677" cluster setting kubeconfig missing "embed-certs-553677" context setting]
	I0120 14:00:49.643720 1060798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:00:49.645261 1060798 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:00:49.657306 1060798 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.136
	I0120 14:00:49.657348 1060798 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:00:49.657367 1060798 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0120 14:00:49.657431 1060798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:00:49.702230 1060798 cri.go:89] found id: "ba4e0b7070c1f21b2fb5b10c65d9fa8449c0cfbb3609dfa150cc3e75c24b3a95"
	I0120 14:00:49.702257 1060798 cri.go:89] found id: "8849248bcbccaaa5666cf1b10a904de017968aafdc8dee3b4c1e513e9406d17e"
	I0120 14:00:49.702260 1060798 cri.go:89] found id: "c20ea93f59ac701e4df9672275e0ff67e1b14867b2327aa7b1c73eca3b8d6a88"
	I0120 14:00:49.702264 1060798 cri.go:89] found id: "773b7e54100723de0d144e1f855bb6abccfe49ac3af85db764979610dd8a7768"
	I0120 14:00:49.702267 1060798 cri.go:89] found id: "b124c5bdd444435d1aca8531c3ad4c61dca0e2f7a57508c1ed4cbda0226873c8"
	I0120 14:00:49.702270 1060798 cri.go:89] found id: "67b8ff6e2106fb5e4450bf23978ca6b657d1642f27d072d2973a89e9898387cd"
	I0120 14:00:49.702272 1060798 cri.go:89] found id: "6cd1df2057537d91ffd5cb9fe006440e912e0aa3015e8fdbb286736a7d4741fa"
	I0120 14:00:49.702274 1060798 cri.go:89] found id: "43e4658ac5fadb19e6506e7f595a4ed0c8bea9c2fa098cfca02d0700f6a77d23"
	I0120 14:00:49.702277 1060798 cri.go:89] found id: ""
	I0120 14:00:49.702283 1060798 cri.go:252] Stopping containers: [ba4e0b7070c1f21b2fb5b10c65d9fa8449c0cfbb3609dfa150cc3e75c24b3a95 8849248bcbccaaa5666cf1b10a904de017968aafdc8dee3b4c1e513e9406d17e c20ea93f59ac701e4df9672275e0ff67e1b14867b2327aa7b1c73eca3b8d6a88 773b7e54100723de0d144e1f855bb6abccfe49ac3af85db764979610dd8a7768 b124c5bdd444435d1aca8531c3ad4c61dca0e2f7a57508c1ed4cbda0226873c8 67b8ff6e2106fb5e4450bf23978ca6b657d1642f27d072d2973a89e9898387cd 6cd1df2057537d91ffd5cb9fe006440e912e0aa3015e8fdbb286736a7d4741fa 43e4658ac5fadb19e6506e7f595a4ed0c8bea9c2fa098cfca02d0700f6a77d23]
	I0120 14:00:49.702351 1060798 ssh_runner.go:195] Run: which crictl
	I0120 14:00:49.707421 1060798 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 ba4e0b7070c1f21b2fb5b10c65d9fa8449c0cfbb3609dfa150cc3e75c24b3a95 8849248bcbccaaa5666cf1b10a904de017968aafdc8dee3b4c1e513e9406d17e c20ea93f59ac701e4df9672275e0ff67e1b14867b2327aa7b1c73eca3b8d6a88 773b7e54100723de0d144e1f855bb6abccfe49ac3af85db764979610dd8a7768 b124c5bdd444435d1aca8531c3ad4c61dca0e2f7a57508c1ed4cbda0226873c8 67b8ff6e2106fb5e4450bf23978ca6b657d1642f27d072d2973a89e9898387cd 6cd1df2057537d91ffd5cb9fe006440e912e0aa3015e8fdbb286736a7d4741fa 43e4658ac5fadb19e6506e7f595a4ed0c8bea9c2fa098cfca02d0700f6a77d23
	I0120 14:00:49.757026 1060798 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:00:49.776829 1060798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:00:49.790392 1060798 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:00:49.790434 1060798 kubeadm.go:157] found existing configuration files:
	
	I0120 14:00:49.790525 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:00:49.802002 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:00:49.802105 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:00:49.813781 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:00:49.828281 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:00:49.828375 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:00:49.843993 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:00:49.858174 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:00:49.858259 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:00:49.870757 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:00:49.882769 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:00:49.882867 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
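Each of the four checks above greps a kubeconfig-style file under /etc/kubernetes for the expected control-plane endpoint and removes the file if the endpoint is not found; here all four files are absent after the stop, so every grep fails and every rm is a no-op. The pattern, sketched:

    // prune_stale_conf.go - remove kubeconfig files that do not point at the expected endpoint.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // endpoint matches, keep the file
    		}
    		// Missing file or wrong endpoint: remove so kubeadm regenerates it.
    		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
    			fmt.Fprintln(os.Stderr, "remove failed:", err)
    		}
    	}
    }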
	I0120 14:00:49.895507 1060798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:00:49.908298 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:50.083446 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:50.896086 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:51.147259 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:51.224080 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
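Rather than a full "kubeadm init", the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config. A sketch of driving those phases in order (binary path and phase names copied from the log; error handling kept minimal):

    // kubeadm_phases.go - run selected kubeadm init phases against a config file.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.32.0/kubeadm"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
    		args = append(args, "--config", cfg)
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		fmt.Println("running: kubeadm", strings.Join(args, " "))
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "phase failed:", err)
    			os.Exit(1)
    		}
    	}
    }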
	I0120 14:00:51.332246 1060798 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:00:51.332382 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:00:51.832815 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:00:52.332689 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:00:52.833065 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:00:52.861484 1060798 api_server.go:72] duration metric: took 1.52923944s to wait for apiserver process to appear ...
	I0120 14:00:52.861523 1060798 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:00:52.861555 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
	I0120 14:00:52.862202 1060798 api_server.go:269] stopped: https://192.168.72.136:8443/healthz: Get "https://192.168.72.136:8443/healthz": dial tcp 192.168.72.136:8443: connect: connection refused
	I0120 14:00:53.361875 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
	I0120 14:00:55.730960 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:00:55.730999 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:00:55.731015 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
	I0120 14:00:55.749785 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:00:55.749821 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:00:55.862208 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
	I0120 14:00:55.915710 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:00:55.915742 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:00:56.362222 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
	I0120 14:00:56.388494 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:00:56.388539 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:00:56.862160 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
	I0120 14:00:56.870469 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:00:56.870580 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:00:57.362195 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
	I0120 14:00:57.381451 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 200:
	ok
	I0120 14:00:57.395915 1060798 api_server.go:141] control plane version: v1.32.0
	I0120 14:00:57.395970 1060798 api_server.go:131] duration metric: took 4.534437824s to wait for apiserver health ...
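The roughly 4.5-second sequence above is a poll loop against /healthz on the freshly restarted apiserver: first a connection refusal, then 403s while anonymous access is still rejected, then 500s while the rbac and priority-class bootstrap post-start hooks finish, and finally a plain 200 "ok". A sketch of that kind of loop (the insecure TLS config stands in for the cluster CA handling and is an assumption, not minikube's real client setup):

    // wait_healthz.go - poll an apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// For a sketch only: skip verification instead of loading the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.72.136:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body))
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
    	os.Exit(1)
    }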
	I0120 14:00:57.396008 1060798 cni.go:84] Creating CNI manager for ""
	I0120 14:00:57.396022 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 14:00:57.397786 1060798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:00:57.399309 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:00:57.420248 1060798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
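With the apiserver healthy, a bridge CNI config (496 bytes in this run) is written to /etc/cni/net.d/1-k8s.conflist. The exact file is not shown in the log; the snippet below writes a minimal bridge + portmap conflist of the same general shape, purely as an illustration:

    // write_cni.go - drop a minimal bridge CNI config into /etc/cni/net.d.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "k8s-pod-network",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }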
	I0120 14:00:57.452035 1060798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:00:57.468438 1060798 system_pods.go:59] 8 kube-system pods found
	I0120 14:00:57.468489 1060798 system_pods.go:61] "coredns-668d6bf9bc-97dc2" [c98d0167-7d4e-43f0-be8d-dc702847de79] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:00:57.468504 1060798 system_pods.go:61] "etcd-embed-certs-553677" [640370fc-478b-4dd1-b546-634a1077cf6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 14:00:57.468514 1060798 system_pods.go:61] "kube-apiserver-embed-certs-553677" [6d0da8ff-1d58-4b5b-88bb-8fa374a996a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 14:00:57.468521 1060798 system_pods.go:61] "kube-controller-manager-embed-certs-553677" [d415449a-97cd-4663-8351-90dd1820cbfc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 14:00:57.468529 1060798 system_pods.go:61] "kube-proxy-rs2x7" [23dba39c-292b-4df7-8d84-adf6233df385] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 14:00:57.468537 1060798 system_pods.go:61] "kube-scheduler-embed-certs-553677" [9e13df4f-f97d-4049-b460-bbf09bcaee47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 14:00:57.468593 1060798 system_pods.go:61] "metrics-server-f79f97bbb-5mwxz" [c190f5c5-67c1-4175-8677-62f6465c91da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:00:57.468604 1060798 system_pods.go:61] "storage-provisioner" [0588ceec-e063-45d6-9442-16c4d66afad3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 14:00:57.468612 1060798 system_pods.go:74] duration metric: took 16.547569ms to wait for pod list to return data ...
	I0120 14:00:57.468620 1060798 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:00:57.474898 1060798 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:00:57.474951 1060798 node_conditions.go:123] node cpu capacity is 2
	I0120 14:00:57.474963 1060798 node_conditions.go:105] duration metric: took 6.338427ms to run NodePressure ...
	I0120 14:00:57.474990 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:57.856387 1060798 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 14:00:57.864423 1060798 kubeadm.go:739] kubelet initialised
	I0120 14:00:57.864453 1060798 kubeadm.go:740] duration metric: took 8.036091ms waiting for restarted kubelet to initialise ...
	I0120 14:00:57.864465 1060798 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
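From here the test waits up to 4 minutes per system-critical pod for its Ready condition; the waits for coredns, etcd, the apiserver, controller-manager, kube-proxy and the scheduler below all succeed, while metrics-server keeps reporting Ready "False" for the remainder of this excerpt. A sketch of one such readiness poll using kubectl's jsonpath output (pod name and namespace taken from the log; minikube itself uses client-go rather than kubectl):

    // wait_ready.go - poll a pod's Ready condition via kubectl until it is True.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	pod, ns := "coredns-668d6bf9bc-97dc2", "kube-system"
    	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "get", "pod", pod, "-n", ns, "-o", jsonpath).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			fmt.Println(pod, "is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Fprintln(os.Stderr, pod, "never became Ready")
    	os.Exit(1)
    }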
	I0120 14:00:57.872764 1060798 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:59.882354 1060798 pod_ready.go:103] pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:01.885356 1060798 pod_ready.go:103] pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:03.881069 1060798 pod_ready.go:93] pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace has status "Ready":"True"
	I0120 14:01:03.881099 1060798 pod_ready.go:82] duration metric: took 6.008294892s for pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:03.881110 1060798 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:05.388296 1060798 pod_ready.go:93] pod "etcd-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:01:05.388326 1060798 pod_ready.go:82] duration metric: took 1.507208465s for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:05.388339 1060798 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:07.395728 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:09.396354 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:10.897222 1060798 pod_ready.go:93] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:01:10.897247 1060798 pod_ready.go:82] duration metric: took 5.508900417s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:10.897258 1060798 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:10.903704 1060798 pod_ready.go:93] pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:01:10.903737 1060798 pod_ready.go:82] duration metric: took 6.470015ms for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:10.903752 1060798 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rs2x7" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:10.910715 1060798 pod_ready.go:93] pod "kube-proxy-rs2x7" in "kube-system" namespace has status "Ready":"True"
	I0120 14:01:10.910750 1060798 pod_ready.go:82] duration metric: took 6.988172ms for pod "kube-proxy-rs2x7" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:10.910763 1060798 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:10.917871 1060798 pod_ready.go:93] pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:01:10.917900 1060798 pod_ready.go:82] duration metric: took 7.129507ms for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:10.917910 1060798 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" ...
	I0120 14:01:12.929849 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:15.427535 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:17.925661 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:19.926557 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:22.425890 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:24.926990 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:27.427353 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:29.927746 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:32.427139 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:34.929766 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:37.427460 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:39.926373 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.427255 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:44.428207 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:46.924388 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:48.926980 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:51.426188 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:53.926309 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:55.928068 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:58.425453 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:00.425835 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:02.552868 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:04.925300 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:06.926078 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:09.428390 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:11.428886 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:13.925379 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:16.425544 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:18.425726 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:20.924433 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:23.425780 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:25.924945 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:27.925705 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:30.431285 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:32.924795 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:34.926051 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:36.926685 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:39.425121 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:41.925316 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:43.925693 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:46.425212 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:48.425692 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:50.924586 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:53.425566 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.425685 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:57.926013 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:00.425153 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:02.924559 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:04.930297 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:07.426640 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:09.925548 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:12.424748 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:14.426806 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:16.923938 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:18.925195 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:20.925946 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:23.425061 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:25.925382 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:27.925943 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:30.424777 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:32.425034 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:34.426763 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:36.925094 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:39.424843 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:41.425799 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:43.925510 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:46.426472 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:48.427287 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:50.927809 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:53.425189 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:55.428748 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.926914 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:00.426009 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:02.924936 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:04.927250 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:07.423593 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:09.425157 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:11.425719 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:13.925414 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:16.426754 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:18.925095 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:20.926956 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:23.425946 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:25.927641 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:28.425557 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:30.426101 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:32.426240 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:34.426618 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:36.427081 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:38.926097 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:41.424924 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:43.425336 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:45.425579 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:47.926756 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:50.427277 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.925532 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:54.926430 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:57.426323 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.926968 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:02.425700 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:04.925140 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:06.925415 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:08.925905 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:10.918089 1060798 pod_ready.go:82] duration metric: took 4m0.000161453s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" ...
	E0120 14:05:10.918131 1060798 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:05:10.918160 1060798 pod_ready.go:39] duration metric: took 4m13.053682746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:10.918201 1060798 kubeadm.go:597] duration metric: took 4m21.286948978s to restartPrimaryControlPlane
	W0120 14:05:10.918306 1060798 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:05:10.918352 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0120 14:05:12.920615 1060798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.002231911s)
	I0120 14:05:12.920701 1060798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:05:12.942116 1060798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:05:12.954775 1060798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:05:12.966775 1060798 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:05:12.966807 1060798 kubeadm.go:157] found existing configuration files:
	
	I0120 14:05:12.966883 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:05:12.977602 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:05:12.977684 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:05:12.989019 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:05:13.000820 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:05:13.000898 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:05:13.016644 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:05:13.031439 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:05:13.031528 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:05:13.042457 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:05:13.055593 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:05:13.055669 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:05:13.068674 1060798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:05:13.130131 1060798 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:05:13.130201 1060798 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:05:13.252056 1060798 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:05:13.252208 1060798 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:05:13.252350 1060798 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:05:13.262351 1060798 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:05:13.264231 1060798 out.go:235]   - Generating certificates and keys ...
	I0120 14:05:13.264325 1060798 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:05:13.264382 1060798 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:05:13.264450 1060798 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:05:13.264503 1060798 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:05:13.264566 1060798 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:05:13.264617 1060798 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:05:13.264693 1060798 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:05:13.264816 1060798 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:05:13.264980 1060798 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:05:13.265097 1060798 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:05:13.265160 1060798 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:05:13.265250 1060798 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:05:13.376018 1060798 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:05:13.789822 1060798 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:05:13.884391 1060798 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:05:14.207456 1060798 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:05:14.442708 1060798 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:05:14.443884 1060798 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:05:14.447802 1060798 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:05:14.449454 1060798 out.go:235]   - Booting up control plane ...
	I0120 14:05:14.449591 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:05:14.449723 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:05:14.450498 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:05:14.474336 1060798 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:05:14.486142 1060798 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:05:14.486368 1060798 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:05:14.656630 1060798 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:05:14.656842 1060798 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:05:15.658053 1060798 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001688461s
	I0120 14:05:15.658185 1060798 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:05:21.661193 1060798 kubeadm.go:310] [api-check] The API server is healthy after 6.00301289s
	I0120 14:05:21.679639 1060798 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:05:21.697225 1060798 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:05:21.729640 1060798 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:05:21.730176 1060798 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-553677 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:05:21.743570 1060798 kubeadm.go:310] [bootstrap-token] Using token: qgu27t.iap2ani2n2k7zkjw
	I0120 14:05:21.745349 1060798 out.go:235]   - Configuring RBAC rules ...
	I0120 14:05:21.745503 1060798 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:05:21.754153 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:05:21.765952 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:05:21.771799 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:05:21.779054 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:05:21.785557 1060798 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:05:22.071797 1060798 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:05:22.539495 1060798 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:05:23.070019 1060798 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:05:23.071157 1060798 kubeadm.go:310] 
	I0120 14:05:23.071304 1060798 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:05:23.071330 1060798 kubeadm.go:310] 
	I0120 14:05:23.071427 1060798 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:05:23.071438 1060798 kubeadm.go:310] 
	I0120 14:05:23.071470 1060798 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:05:23.071548 1060798 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:05:23.071621 1060798 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:05:23.071631 1060798 kubeadm.go:310] 
	I0120 14:05:23.071735 1060798 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:05:23.071777 1060798 kubeadm.go:310] 
	I0120 14:05:23.071865 1060798 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:05:23.071878 1060798 kubeadm.go:310] 
	I0120 14:05:23.071948 1060798 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:05:23.072051 1060798 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:05:23.072144 1060798 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:05:23.072164 1060798 kubeadm.go:310] 
	I0120 14:05:23.072309 1060798 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:05:23.072412 1060798 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:05:23.072423 1060798 kubeadm.go:310] 
	I0120 14:05:23.072537 1060798 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qgu27t.iap2ani2n2k7zkjw \
	I0120 14:05:23.072690 1060798 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6117bbf309b9c45faa7e855ae242c4a905187b8a6090715b408f9a384f87e114 \
	I0120 14:05:23.072722 1060798 kubeadm.go:310] 	--control-plane 
	I0120 14:05:23.072736 1060798 kubeadm.go:310] 
	I0120 14:05:23.072848 1060798 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:05:23.072867 1060798 kubeadm.go:310] 
	I0120 14:05:23.072985 1060798 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qgu27t.iap2ani2n2k7zkjw \
	I0120 14:05:23.073167 1060798 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6117bbf309b9c45faa7e855ae242c4a905187b8a6090715b408f9a384f87e114 
	I0120 14:05:23.075375 1060798 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:05:23.075417 1060798 cni.go:84] Creating CNI manager for ""
	I0120 14:05:23.075445 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 14:05:23.077601 1060798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:05:23.079121 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:05:23.091937 1060798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:05:23.116874 1060798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:05:23.116939 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:23.116978 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-553677 minikube.k8s.io/updated_at=2025_01_20T14_05_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=embed-certs-553677 minikube.k8s.io/primary=true
	I0120 14:05:23.148895 1060798 ops.go:34] apiserver oom_adj: -16
	I0120 14:05:23.378558 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:23.879347 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:24.379349 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:24.879187 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:25.379285 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:25.879105 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:26.379133 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:26.478857 1060798 kubeadm.go:1113] duration metric: took 3.36197683s to wait for elevateKubeSystemPrivileges
	I0120 14:05:26.478907 1060798 kubeadm.go:394] duration metric: took 4m36.924060891s to StartCluster
	I0120 14:05:26.478935 1060798 settings.go:142] acquiring lock: {Name:mked7f2376b8a06c64dcfd911ab4b0d95ecdbe2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:26.479036 1060798 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 14:05:26.481214 1060798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:26.481626 1060798 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 14:05:26.481760 1060798 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:05:26.481876 1060798 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-553677"
	I0120 14:05:26.481896 1060798 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-553677"
	W0120 14:05:26.481905 1060798 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:05:26.481906 1060798 config.go:182] Loaded profile config "embed-certs-553677": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:05:26.481916 1060798 addons.go:69] Setting default-storageclass=true in profile "embed-certs-553677"
	I0120 14:05:26.481942 1060798 addons.go:69] Setting metrics-server=true in profile "embed-certs-553677"
	I0120 14:05:26.481958 1060798 addons.go:238] Setting addon metrics-server=true in "embed-certs-553677"
	W0120 14:05:26.481970 1060798 addons.go:247] addon metrics-server should already be in state true
	I0120 14:05:26.481989 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
	I0120 14:05:26.481957 1060798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-553677"
	I0120 14:05:26.481936 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
	I0120 14:05:26.482431 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.482468 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.481939 1060798 addons.go:69] Setting dashboard=true in profile "embed-certs-553677"
	I0120 14:05:26.482542 1060798 addons.go:238] Setting addon dashboard=true in "embed-certs-553677"
	W0120 14:05:26.482554 1060798 addons.go:247] addon dashboard should already be in state true
	I0120 14:05:26.482556 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.482578 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
	I0120 14:05:26.482592 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.482543 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.482710 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.482972 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.483025 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.483426 1060798 out.go:177] * Verifying Kubernetes components...
	I0120 14:05:26.485000 1060798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:05:26.503670 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35691
	I0120 14:05:26.503915 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I0120 14:05:26.503956 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
	I0120 14:05:26.504290 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.504434 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.505146 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.505154 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.505171 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.505175 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.505608 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.505613 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.505894 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.506345 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.506391 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.506479 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.506502 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.506645 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.506751 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.507010 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.507160 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0120 14:05:26.507428 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.507754 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.508311 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.508336 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.508797 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.509512 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.509563 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.512304 1060798 addons.go:238] Setting addon default-storageclass=true in "embed-certs-553677"
	W0120 14:05:26.512327 1060798 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:05:26.512357 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
	I0120 14:05:26.512623 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.512672 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.529326 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
	I0120 14:05:26.530030 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.530626 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.530648 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.530699 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
	I0120 14:05:26.530970 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0120 14:05:26.531055 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.531380 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.531456 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.531589 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.531641 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.531661 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.532129 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.532156 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.532234 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.532425 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.532428 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0120 14:05:26.532828 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.532931 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.533311 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.535196 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.535230 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.535639 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.536245 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.536293 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.537777 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:05:26.538423 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:05:26.538544 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:05:26.540631 1060798 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:05:26.540639 1060798 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:05:26.540707 1060798 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:05:26.541975 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:05:26.541997 1060798 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:05:26.542019 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:05:26.542075 1060798 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:26.542094 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:05:26.542115 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:05:26.544926 1060798 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:05:26.546368 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:05:26.546392 1060798 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:05:26.546418 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:05:26.549578 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.549713 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.553664 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:05:26.553690 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.553947 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:05:26.554117 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:05:26.554221 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:05:26.554305 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:05:26.554626 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.554889 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:05:26.554914 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.555102 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:05:26.555168 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:05:26.555182 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.555284 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:05:26.555340 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:05:26.555596 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:05:26.555691 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:05:26.555715 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:05:26.555883 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:05:26.556015 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:05:26.560724 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
	I0120 14:05:26.561235 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.561723 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.561738 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.562059 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.562297 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.564026 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:05:26.564278 1060798 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:26.564290 1060798 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:05:26.564304 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:05:26.567858 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.568393 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:05:26.568433 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.568556 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:05:26.568742 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:05:26.568910 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:05:26.569124 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:05:26.773077 1060798 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:05:26.800362 1060798 node_ready.go:35] waiting up to 6m0s for node "embed-certs-553677" to be "Ready" ...
	I0120 14:05:26.843740 1060798 node_ready.go:49] node "embed-certs-553677" has status "Ready":"True"
	I0120 14:05:26.843780 1060798 node_ready.go:38] duration metric: took 43.372924ms for node "embed-certs-553677" to be "Ready" ...
	I0120 14:05:26.843796 1060798 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:26.873119 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:05:26.873149 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:05:26.874981 1060798 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:26.906789 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:26.940145 1060798 pod_ready.go:93] pod "etcd-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:26.940190 1060798 pod_ready.go:82] duration metric: took 65.181123ms for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:26.940211 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:26.969325 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:05:26.969365 1060798 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:05:26.969405 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:26.989583 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:05:26.989615 1060798 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:05:27.153235 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:05:27.153271 1060798 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:05:27.177818 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:27.177844 1060798 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:05:27.342345 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:05:27.342379 1060798 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:05:27.474579 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:05:27.474615 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:05:27.480859 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:27.583861 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:05:27.583897 1060798 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:05:27.625368 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:27.625405 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:27.625755 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:27.625774 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:27.625784 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:27.625792 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:27.626090 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:27.626113 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:27.626136 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
	I0120 14:05:27.642156 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:27.642194 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:27.642522 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:27.642553 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:27.884652 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:05:27.884699 1060798 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:05:28.031119 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:05:28.031155 1060798 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:05:28.145159 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:05:28.145199 1060798 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:05:28.273725 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:28.273765 1060798 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:05:28.506539 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:28.887655 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.918209178s)
	I0120 14:05:28.887715 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:28.887730 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:28.888066 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:28.888078 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:28.888089 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:28.888098 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:28.889637 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:28.889660 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:28.889672 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
	I0120 14:05:28.971702 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:29.421863 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.940948518s)
	I0120 14:05:29.421940 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:29.421960 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:29.422340 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
	I0120 14:05:29.422359 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:29.422381 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:29.422399 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:29.422412 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:29.422673 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:29.422690 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:29.422702 1060798 addons.go:479] Verifying addon metrics-server=true in "embed-certs-553677"
	I0120 14:05:29.422725 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
	I0120 14:05:30.228977 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.722367434s)
	I0120 14:05:30.229039 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:30.229056 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:30.229398 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:30.229421 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:30.229431 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:30.229439 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:30.229692 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:30.229713 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:30.231477 1060798 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-553677 addons enable metrics-server
	
	I0120 14:05:30.233108 1060798 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:05:30.234556 1060798 addons.go:514] duration metric: took 3.752807641s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:05:31.446192 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:33.453220 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:35.447702 1060798 pod_ready.go:93] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:35.447735 1060798 pod_ready.go:82] duration metric: took 8.507515045s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.447745 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.453130 1060798 pod_ready.go:93] pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:35.453158 1060798 pod_ready.go:82] duration metric: took 5.406746ms for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.453169 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p5rcq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.457466 1060798 pod_ready.go:93] pod "kube-proxy-p5rcq" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:35.457492 1060798 pod_ready.go:82] duration metric: took 4.316578ms for pod "kube-proxy-p5rcq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.457503 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.462012 1060798 pod_ready.go:93] pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:35.462036 1060798 pod_ready.go:82] duration metric: took 4.526901ms for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.462043 1060798 pod_ready.go:39] duration metric: took 8.61823381s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:35.462058 1060798 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:35.462111 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:35.477958 1060798 api_server.go:72] duration metric: took 8.996279799s to wait for apiserver process to appear ...
	I0120 14:05:35.477993 1060798 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:35.478019 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
	I0120 14:05:35.483505 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 200:
	ok
	I0120 14:05:35.484660 1060798 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:35.484690 1060798 api_server.go:131] duration metric: took 6.687782ms to wait for apiserver health ...
	I0120 14:05:35.484701 1060798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:35.490073 1060798 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:35.490118 1060798 system_pods.go:61] "coredns-668d6bf9bc-6dk7s" [1bba3148-0210-42ef-b08e-753e16365e33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:05:35.490129 1060798 system_pods.go:61] "coredns-668d6bf9bc-88phd" [dfc4947e-a505-4337-99d3-156d86f7646c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:05:35.490137 1060798 system_pods.go:61] "etcd-embed-certs-553677" [c915afbe-8665-4fbf-bcae-802c3ca214dd] Running
	I0120 14:05:35.490143 1060798 system_pods.go:61] "kube-apiserver-embed-certs-553677" [d04063fb-d723-4a72-9024-0b6ceba0f09d] Running
	I0120 14:05:35.490149 1060798 system_pods.go:61] "kube-controller-manager-embed-certs-553677" [c6de6703-1533-4391-a67e-f2c2208ebafe] Running
	I0120 14:05:35.490153 1060798 system_pods.go:61] "kube-proxy-p5rcq" [3a9ddae1-ef67-4dd0-9c18-77e796c37d2a] Running
	I0120 14:05:35.490157 1060798 system_pods.go:61] "kube-scheduler-embed-certs-553677" [10c63c3f-0748-4af6-94fb-a0ca644d4c61] Running
	I0120 14:05:35.490164 1060798 system_pods.go:61] "metrics-server-f79f97bbb-b92sv" [f9b310a6-0d19-4084-aeae-ebe0a395d042] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:35.490170 1060798 system_pods.go:61] "storage-provisioner" [a6c0070e-1e3c-48af-80e3-1c3ca9163bf8] Running
	I0120 14:05:35.490179 1060798 system_pods.go:74] duration metric: took 5.471078ms to wait for pod list to return data ...
	I0120 14:05:35.490189 1060798 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:35.493453 1060798 default_sa.go:45] found service account: "default"
	I0120 14:05:35.493489 1060798 default_sa.go:55] duration metric: took 3.2839ms for default service account to be created ...
	I0120 14:05:35.493500 1060798 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:35.648514 1060798 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-553677 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-553677 -n embed-certs-553677
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-553677 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-553677 logs -n 25: (1.421405846s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | old-k8s-version-743378 image                           | old-k8s-version-743378       | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
	|         | list --format=json                                     |                              |         |         |                     |                     |
	| pause   | -p old-k8s-version-743378                              | old-k8s-version-743378       | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p old-k8s-version-743378                              | old-k8s-version-743378       | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-743378                              | old-k8s-version-743378       | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
	| delete  | -p old-k8s-version-743378                              | old-k8s-version-743378       | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
	| start   | -p newest-cni-488874 --memory=2200 --alsologtostderr   | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-488874             | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:04 UTC | 20 Jan 25 14:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-488874                                   | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:04 UTC | 20 Jan 25 14:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-488874                  | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:04 UTC | 20 Jan 25 14:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-488874 --memory=2200 --alsologtostderr   | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:04 UTC | 20 Jan 25 14:05 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| image   | no-preload-097312 image list                           | no-preload-097312            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-097312                                   | no-preload-097312            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-097312                                   | no-preload-097312            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-097312                                   | no-preload-097312            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	| delete  | -p no-preload-097312                                   | no-preload-097312            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	| image   | newest-cni-488874 image list                           | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-488874                                   | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-488874                                   | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-488874                                   | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	| delete  | -p newest-cni-488874                                   | newest-cni-488874            | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
	| image   | default-k8s-diff-port-901416                           | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
	|         | default-k8s-diff-port-901416                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
	|         | default-k8s-diff-port-901416                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
	|         | default-k8s-diff-port-901416                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
	|         | default-k8s-diff-port-901416                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 14:04:47
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 14:04:47.050101 1063160 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:04:47.050227 1063160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:04:47.050232 1063160 out.go:358] Setting ErrFile to fd 2...
	I0120 14:04:47.050237 1063160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:04:47.050499 1063160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 14:04:47.051203 1063160 out.go:352] Setting JSON to false
	I0120 14:04:47.052449 1063160 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13629,"bootTime":1737368258,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:04:47.052579 1063160 start.go:139] virtualization: kvm guest
	I0120 14:04:47.055235 1063160 out.go:177] * [newest-cni-488874] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:04:47.056951 1063160 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:04:47.056934 1063160 notify.go:220] Checking for updates...
	I0120 14:04:47.058630 1063160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:04:47.060396 1063160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 14:04:47.061968 1063160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	I0120 14:04:47.063408 1063160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:04:47.064917 1063160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:04:47.066668 1063160 config.go:182] Loaded profile config "newest-cni-488874": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:04:47.067111 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:04:47.067182 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:47.083702 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37981
	I0120 14:04:47.084272 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:47.084954 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:04:47.084998 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:47.085439 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:47.085687 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:04:47.086006 1063160 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:04:47.086434 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:04:47.086492 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:47.103220 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0120 14:04:47.103721 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:47.104507 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:04:47.104547 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:47.104876 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:47.105165 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:04:47.143032 1063160 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 14:04:47.144670 1063160 start.go:297] selected driver: kvm2
	I0120 14:04:47.144697 1063160 start.go:901] validating driver "kvm2" against &{Name:newest-cni-488874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:04:47.144885 1063160 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:04:47.145958 1063160 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:04:47.146076 1063160 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-998973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:04:47.162250 1063160 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:04:47.162842 1063160 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0120 14:04:47.162911 1063160 cni.go:84] Creating CNI manager for ""
	I0120 14:04:47.162986 1063160 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 14:04:47.163055 1063160 start.go:340] cluster config:
	{Name:newest-cni-488874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:04:47.163221 1063160 iso.go:125] acquiring lock: {Name:mk63965bcac7e5d2166c667dd03e4270f636bd53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:04:47.165513 1063160 out.go:177] * Starting "newest-cni-488874" primary control-plane node in "newest-cni-488874" cluster
	I0120 14:04:47.167021 1063160 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 14:04:47.167079 1063160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
	I0120 14:04:47.167105 1063160 cache.go:56] Caching tarball of preloaded images
	I0120 14:04:47.167264 1063160 preload.go:172] Found /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0120 14:04:47.167288 1063160 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
	I0120 14:04:47.167435 1063160 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/config.json ...
	I0120 14:04:47.167717 1063160 start.go:360] acquireMachinesLock for newest-cni-488874: {Name:mk36ae0f7b2d42a8734a6403f72836860fc4ccfa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:04:47.167792 1063160 start.go:364] duration metric: took 47.776µs to acquireMachinesLock for "newest-cni-488874"
	I0120 14:04:47.167814 1063160 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:04:47.167822 1063160 fix.go:54] fixHost starting: 
	I0120 14:04:47.168125 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:04:47.168164 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:47.183549 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0120 14:04:47.184104 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:47.184711 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:04:47.184744 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:47.185155 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:47.185366 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:04:47.185574 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
	I0120 14:04:47.187388 1063160 fix.go:112] recreateIfNeeded on newest-cni-488874: state=Stopped err=<nil>
	I0120 14:04:47.187412 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	W0120 14:04:47.187603 1063160 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:04:47.189877 1063160 out.go:177] * Restarting existing kvm2 VM for "newest-cni-488874" ...
	I0120 14:04:45.425579 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:47.926756 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:46.868852 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:48.870219 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:46.915776 1060619 pod_ready.go:103] pod "metrics-server-f79f97bbb-4wzdk" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:48.916552 1060619 pod_ready.go:103] pod "metrics-server-f79f97bbb-4wzdk" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:51.415545 1060619 pod_ready.go:103] pod "metrics-server-f79f97bbb-4wzdk" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:47.191455 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Start
	I0120 14:04:47.191771 1063160 main.go:141] libmachine: (newest-cni-488874) starting domain...
	I0120 14:04:47.191797 1063160 main.go:141] libmachine: (newest-cni-488874) ensuring networks are active...
	I0120 14:04:47.192792 1063160 main.go:141] libmachine: (newest-cni-488874) Ensuring network default is active
	I0120 14:04:47.193160 1063160 main.go:141] libmachine: (newest-cni-488874) Ensuring network mk-newest-cni-488874 is active
	I0120 14:04:47.193642 1063160 main.go:141] libmachine: (newest-cni-488874) getting domain XML...
	I0120 14:04:47.194500 1063160 main.go:141] libmachine: (newest-cni-488874) creating domain...
	I0120 14:04:48.526775 1063160 main.go:141] libmachine: (newest-cni-488874) waiting for IP...
	I0120 14:04:48.527710 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:48.528359 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:48.528470 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:48.528313 1063195 retry.go:31] will retry after 228.063414ms: waiting for domain to come up
	I0120 14:04:48.757843 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:48.758439 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:48.758480 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:48.758409 1063195 retry.go:31] will retry after 375.398282ms: waiting for domain to come up
	I0120 14:04:49.135078 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:49.135653 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:49.135704 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:49.135605 1063195 retry.go:31] will retry after 439.758196ms: waiting for domain to come up
	I0120 14:04:49.577514 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:49.578119 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:49.578170 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:49.578078 1063195 retry.go:31] will retry after 456.356276ms: waiting for domain to come up
	I0120 14:04:50.035835 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:50.036421 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:50.036455 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:50.036381 1063195 retry.go:31] will retry after 602.99846ms: waiting for domain to come up
	I0120 14:04:50.641379 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:50.642024 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:50.642052 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:50.641984 1063195 retry.go:31] will retry after 929.982744ms: waiting for domain to come up
	I0120 14:04:51.573106 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:51.573644 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:51.573676 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:51.573578 1063195 retry.go:31] will retry after 800.371471ms: waiting for domain to come up
	I0120 14:04:50.427277 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.925532 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:51.369069 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:53.369540 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:51.914831 1060619 pod_ready.go:82] duration metric: took 4m0.007391522s for pod "metrics-server-f79f97bbb-4wzdk" in "kube-system" namespace to be "Ready" ...
	E0120 14:04:51.914867 1060619 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 14:04:51.914878 1060619 pod_ready.go:39] duration metric: took 4m7.421521073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:51.914899 1060619 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:04:51.914936 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:51.915002 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:51.972482 1060619 cri.go:89] found id: "7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
	I0120 14:04:51.972517 1060619 cri.go:89] found id: "02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
	I0120 14:04:51.972524 1060619 cri.go:89] found id: ""
	I0120 14:04:51.972535 1060619 logs.go:282] 2 containers: [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad]
	I0120 14:04:51.972606 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:51.978179 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:51.987282 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:04:51.987420 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:52.032979 1060619 cri.go:89] found id: "9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
	I0120 14:04:52.033017 1060619 cri.go:89] found id: "55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
	I0120 14:04:52.033024 1060619 cri.go:89] found id: ""
	I0120 14:04:52.033035 1060619 logs.go:282] 2 containers: [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512]
	I0120 14:04:52.033107 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.039652 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.044848 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:04:52.044932 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:52.096249 1060619 cri.go:89] found id: "adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
	I0120 14:04:52.096283 1060619 cri.go:89] found id: "41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
	I0120 14:04:52.096289 1060619 cri.go:89] found id: ""
	I0120 14:04:52.096300 1060619 logs.go:282] 2 containers: [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246]
	I0120 14:04:52.096369 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.101358 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.106095 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:52.106169 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:52.154285 1060619 cri.go:89] found id: "fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
	I0120 14:04:52.154318 1060619 cri.go:89] found id: "b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
	I0120 14:04:52.154323 1060619 cri.go:89] found id: ""
	I0120 14:04:52.154331 1060619 logs.go:282] 2 containers: [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a]
	I0120 14:04:52.154382 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.159475 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.164277 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:52.164353 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:52.204626 1060619 cri.go:89] found id: "d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
	I0120 14:04:52.204657 1060619 cri.go:89] found id: "690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
	I0120 14:04:52.204663 1060619 cri.go:89] found id: ""
	I0120 14:04:52.204674 1060619 logs.go:282] 2 containers: [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527]
	I0120 14:04:52.204736 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.209519 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.213820 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:52.213885 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:52.257332 1060619 cri.go:89] found id: "68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
	I0120 14:04:52.257364 1060619 cri.go:89] found id: "72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
	I0120 14:04:52.257371 1060619 cri.go:89] found id: ""
	I0120 14:04:52.257382 1060619 logs.go:282] 2 containers: [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67]
	I0120 14:04:52.257446 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.263188 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.269822 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:52.269897 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:52.312509 1060619 cri.go:89] found id: ""
	I0120 14:04:52.312539 1060619 logs.go:282] 0 containers: []
	W0120 14:04:52.312548 1060619 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:52.312562 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:52.312618 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:52.360717 1060619 cri.go:89] found id: "19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
	I0120 14:04:52.360745 1060619 cri.go:89] found id: ""
	I0120 14:04:52.360756 1060619 logs.go:282] 1 containers: [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a]
	I0120 14:04:52.360832 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.366217 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:04:52.366308 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:04:52.415084 1060619 cri.go:89] found id: "d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
	I0120 14:04:52.415123 1060619 cri.go:89] found id: "1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
	I0120 14:04:52.415129 1060619 cri.go:89] found id: ""
	I0120 14:04:52.415140 1060619 logs.go:282] 2 containers: [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416]
	I0120 14:04:52.415218 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.419894 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:52.424668 1060619 logs.go:123] Gathering logs for kube-controller-manager [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787] ...
	I0120 14:04:52.424696 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
	I0120 14:04:52.489085 1060619 logs.go:123] Gathering logs for kubernetes-dashboard [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a] ...
	I0120 14:04:52.489131 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
	I0120 14:04:52.536894 1060619 logs.go:123] Gathering logs for kube-scheduler [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807] ...
	I0120 14:04:52.536937 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
	I0120 14:04:52.577327 1060619 logs.go:123] Gathering logs for kube-scheduler [b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a] ...
	I0120 14:04:52.577371 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
	I0120 14:04:52.635187 1060619 logs.go:123] Gathering logs for kube-proxy [690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527] ...
	I0120 14:04:52.635246 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
	I0120 14:04:52.678528 1060619 logs.go:123] Gathering logs for containerd ...
	I0120 14:04:52.678570 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:04:52.739780 1060619 logs.go:123] Gathering logs for container status ...
	I0120 14:04:52.739830 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:52.791166 1060619 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:52.791233 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:04:52.961331 1060619 logs.go:123] Gathering logs for etcd [55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512] ...
	I0120 14:04:52.961376 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
	I0120 14:04:53.045232 1060619 logs.go:123] Gathering logs for storage-provisioner [1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416] ...
	I0120 14:04:53.045281 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
	I0120 14:04:53.093889 1060619 logs.go:123] Gathering logs for kube-controller-manager [72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67] ...
	I0120 14:04:53.093950 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
	I0120 14:04:53.174518 1060619 logs.go:123] Gathering logs for storage-provisioner [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc] ...
	I0120 14:04:53.174565 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
	I0120 14:04:53.221380 1060619 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:53.221424 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:53.303548 1060619 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:53.303629 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:53.319656 1060619 logs.go:123] Gathering logs for coredns [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316] ...
	I0120 14:04:53.319700 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
	I0120 14:04:53.363932 1060619 logs.go:123] Gathering logs for coredns [41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246] ...
	I0120 14:04:53.363976 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
	I0120 14:04:53.425306 1060619 logs.go:123] Gathering logs for kube-proxy [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332] ...
	I0120 14:04:53.425353 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
	I0120 14:04:53.479186 1060619 logs.go:123] Gathering logs for kube-apiserver [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31] ...
	I0120 14:04:53.479230 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
	I0120 14:04:53.537133 1060619 logs.go:123] Gathering logs for kube-apiserver [02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad] ...
	I0120 14:04:53.537190 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
	I0120 14:04:53.587036 1060619 logs.go:123] Gathering logs for etcd [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b] ...
	I0120 14:04:53.587082 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
	I0120 14:04:56.146948 1060619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:56.165984 1060619 api_server.go:72] duration metric: took 4m18.967999913s to wait for apiserver process to appear ...
	I0120 14:04:56.166016 1060619 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:04:56.166056 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:56.166126 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:56.216149 1060619 cri.go:89] found id: "7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
	I0120 14:04:56.216180 1060619 cri.go:89] found id: "02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
	I0120 14:04:56.216185 1060619 cri.go:89] found id: ""
	I0120 14:04:56.216195 1060619 logs.go:282] 2 containers: [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad]
	I0120 14:04:56.216261 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.221620 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.227539 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:04:56.227642 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:56.271909 1060619 cri.go:89] found id: "9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
	I0120 14:04:56.271946 1060619 cri.go:89] found id: "55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
	I0120 14:04:56.271952 1060619 cri.go:89] found id: ""
	I0120 14:04:56.271964 1060619 logs.go:282] 2 containers: [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512]
	I0120 14:04:56.272035 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.278155 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.283955 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:04:56.284047 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:56.328236 1060619 cri.go:89] found id: "adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
	I0120 14:04:56.328271 1060619 cri.go:89] found id: "41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
	I0120 14:04:56.328277 1060619 cri.go:89] found id: ""
	I0120 14:04:56.328288 1060619 logs.go:282] 2 containers: [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246]
	I0120 14:04:56.328364 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.334015 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.339913 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:56.340003 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:56.393554 1060619 cri.go:89] found id: "fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
	I0120 14:04:56.393592 1060619 cri.go:89] found id: "b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
	I0120 14:04:56.393600 1060619 cri.go:89] found id: ""
	I0120 14:04:56.393612 1060619 logs.go:282] 2 containers: [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a]
	I0120 14:04:56.393685 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.400490 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.407736 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:56.407844 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:52.375493 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:52.376103 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:52.376133 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:52.376063 1063195 retry.go:31] will retry after 1.091722591s: waiting for domain to come up
	I0120 14:04:53.469641 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:53.470320 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:53.470350 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:53.470265 1063195 retry.go:31] will retry after 1.304505368s: waiting for domain to come up
	I0120 14:04:54.776482 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:54.777187 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:54.777216 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:54.777099 1063195 retry.go:31] will retry after 1.932003229s: waiting for domain to come up
	I0120 14:04:56.711489 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:56.712094 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:56.712128 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:56.712033 1063195 retry.go:31] will retry after 1.877119762s: waiting for domain to come up
	I0120 14:04:54.926430 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:57.426323 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:55.868690 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:57.869554 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:56.471585 1060619 cri.go:89] found id: "d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
	I0120 14:04:56.471616 1060619 cri.go:89] found id: "690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
	I0120 14:04:56.471622 1060619 cri.go:89] found id: ""
	I0120 14:04:56.471633 1060619 logs.go:282] 2 containers: [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527]
	I0120 14:04:56.471707 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.477704 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.483023 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:56.483126 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:56.544017 1060619 cri.go:89] found id: "68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
	I0120 14:04:56.544046 1060619 cri.go:89] found id: "72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
	I0120 14:04:56.544053 1060619 cri.go:89] found id: ""
	I0120 14:04:56.544063 1060619 logs.go:282] 2 containers: [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67]
	I0120 14:04:56.544136 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.548798 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.554021 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:56.554093 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:56.604780 1060619 cri.go:89] found id: ""
	I0120 14:04:56.604824 1060619 logs.go:282] 0 containers: []
	W0120 14:04:56.604837 1060619 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:56.604845 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:56.604922 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:56.646325 1060619 cri.go:89] found id: "19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
	I0120 14:04:56.646359 1060619 cri.go:89] found id: ""
	I0120 14:04:56.646371 1060619 logs.go:282] 1 containers: [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a]
	I0120 14:04:56.646439 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.651126 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:04:56.651234 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:04:56.694400 1060619 cri.go:89] found id: "d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
	I0120 14:04:56.694443 1060619 cri.go:89] found id: "1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
	I0120 14:04:56.694449 1060619 cri.go:89] found id: ""
	I0120 14:04:56.694459 1060619 logs.go:282] 2 containers: [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416]
	I0120 14:04:56.694539 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.701264 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:04:56.707843 1060619 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:56.707878 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:56.810155 1060619 logs.go:123] Gathering logs for etcd [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b] ...
	I0120 14:04:56.810208 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
	I0120 14:04:56.878486 1060619 logs.go:123] Gathering logs for etcd [55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512] ...
	I0120 14:04:56.878584 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
	I0120 14:04:56.984323 1060619 logs.go:123] Gathering logs for kube-scheduler [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807] ...
	I0120 14:04:56.984370 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
	I0120 14:04:57.030429 1060619 logs.go:123] Gathering logs for kube-proxy [690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527] ...
	I0120 14:04:57.030485 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
	I0120 14:04:57.075957 1060619 logs.go:123] Gathering logs for kube-controller-manager [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787] ...
	I0120 14:04:57.076008 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
	I0120 14:04:57.151785 1060619 logs.go:123] Gathering logs for storage-provisioner [1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416] ...
	I0120 14:04:57.151851 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
	I0120 14:04:57.200132 1060619 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:57.200178 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:57.221442 1060619 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:57.221495 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:04:57.353366 1060619 logs.go:123] Gathering logs for kube-apiserver [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31] ...
	I0120 14:04:57.353421 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
	I0120 14:04:57.427690 1060619 logs.go:123] Gathering logs for kube-apiserver [02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad] ...
	I0120 14:04:57.427726 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
	I0120 14:04:57.502048 1060619 logs.go:123] Gathering logs for kube-controller-manager [72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67] ...
	I0120 14:04:57.502097 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
	I0120 14:04:57.566324 1060619 logs.go:123] Gathering logs for kubernetes-dashboard [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a] ...
	I0120 14:04:57.566369 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
	I0120 14:04:57.614013 1060619 logs.go:123] Gathering logs for coredns [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316] ...
	I0120 14:04:57.614063 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
	I0120 14:04:57.671629 1060619 logs.go:123] Gathering logs for coredns [41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246] ...
	I0120 14:04:57.671670 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
	I0120 14:04:57.733137 1060619 logs.go:123] Gathering logs for kube-scheduler [b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a] ...
	I0120 14:04:57.733192 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
	I0120 14:04:57.795230 1060619 logs.go:123] Gathering logs for kube-proxy [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332] ...
	I0120 14:04:57.795287 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
	I0120 14:04:57.850704 1060619 logs.go:123] Gathering logs for storage-provisioner [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc] ...
	I0120 14:04:57.850745 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
	I0120 14:04:57.913118 1060619 logs.go:123] Gathering logs for containerd ...
	I0120 14:04:57.913164 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:04:57.987033 1060619 logs.go:123] Gathering logs for container status ...
	I0120 14:04:57.987081 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
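[Editor's note] The block above is minikube's log-gathering pass: for each control-plane component it lists matching container IDs with "crictl ps -a --quiet --name=<component>" and then tails each container's last 400 log lines. What follows is a standalone sketch of that loop, not minikube's cri.go/logs.go code; the helper names are hypothetical and it runs crictl locally rather than over SSH.

// Standalone sketch (assumed, not minikube's code) of the log-gathering pass above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name matches.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs returns the last 400 log lines of one container, as in the logged command.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		"storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
		}
	}
}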
	I0120 14:05:00.546303 1060619 api_server.go:253] Checking apiserver healthz at https://192.168.61.149:8443/healthz ...
	I0120 14:05:00.555978 1060619 api_server.go:279] https://192.168.61.149:8443/healthz returned 200:
	ok
	I0120 14:05:00.557505 1060619 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:00.557538 1060619 api_server.go:131] duration metric: took 4.391514556s to wait for apiserver health ...
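[Editor's note] The healthz probe logged here is a plain HTTPS GET against https://192.168.61.149:8443/healthz that counts as healthy when it returns 200 with body "ok". A minimal sketch follows; minikube's real api_server.go check authenticates with the cluster's client certificates, whereas this illustration simply skips TLS verification.

// Minimal sketch (assumed) of the apiserver healthz probe seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy reports whether GET <url> returned 200 with body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.61.149:8443/healthz")
	fmt.Println(ok, err)
}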
	I0120 14:05:00.557550 1060619 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:00.557582 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:00.557652 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:00.619715 1060619 cri.go:89] found id: "7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
	I0120 14:05:00.619751 1060619 cri.go:89] found id: "02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
	I0120 14:05:00.619758 1060619 cri.go:89] found id: ""
	I0120 14:05:00.619771 1060619 logs.go:282] 2 containers: [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad]
	I0120 14:05:00.619848 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.624825 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.629551 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:05:00.629633 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:00.674890 1060619 cri.go:89] found id: "9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
	I0120 14:05:00.674937 1060619 cri.go:89] found id: "55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
	I0120 14:05:00.674944 1060619 cri.go:89] found id: ""
	I0120 14:05:00.674956 1060619 logs.go:282] 2 containers: [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512]
	I0120 14:05:00.675029 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.680286 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.685334 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:05:00.685431 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:00.729647 1060619 cri.go:89] found id: "adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
	I0120 14:05:00.729678 1060619 cri.go:89] found id: "41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
	I0120 14:05:00.729684 1060619 cri.go:89] found id: ""
	I0120 14:05:00.729694 1060619 logs.go:282] 2 containers: [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246]
	I0120 14:05:00.729766 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.734865 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.740340 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:00.740429 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:00.799061 1060619 cri.go:89] found id: "fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
	I0120 14:05:00.799094 1060619 cri.go:89] found id: "b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
	I0120 14:05:00.799101 1060619 cri.go:89] found id: ""
	I0120 14:05:00.799111 1060619 logs.go:282] 2 containers: [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a]
	I0120 14:05:00.799192 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.803902 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.808273 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:00.808346 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:00.852747 1060619 cri.go:89] found id: "d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
	I0120 14:05:00.852784 1060619 cri.go:89] found id: "690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
	I0120 14:05:00.852790 1060619 cri.go:89] found id: ""
	I0120 14:05:00.852803 1060619 logs.go:282] 2 containers: [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527]
	I0120 14:05:00.852872 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.858346 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.863202 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:00.863279 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:00.907450 1060619 cri.go:89] found id: "68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
	I0120 14:05:00.907474 1060619 cri.go:89] found id: "72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
	I0120 14:05:00.907478 1060619 cri.go:89] found id: ""
	I0120 14:05:00.907486 1060619 logs.go:282] 2 containers: [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67]
	I0120 14:05:00.907542 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.912507 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:00.917120 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:00.917216 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:00.959792 1060619 cri.go:89] found id: ""
	I0120 14:05:00.959828 1060619 logs.go:282] 0 containers: []
	W0120 14:05:00.959840 1060619 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:00.959848 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:00.959923 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:00.999755 1060619 cri.go:89] found id: "19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
	I0120 14:05:00.999785 1060619 cri.go:89] found id: ""
	I0120 14:05:00.999794 1060619 logs.go:282] 1 containers: [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a]
	I0120 14:05:00.999845 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:01.004371 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:05:01.004466 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:05:01.044946 1060619 cri.go:89] found id: "d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
	I0120 14:05:01.044990 1060619 cri.go:89] found id: "1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
	I0120 14:05:01.044997 1060619 cri.go:89] found id: ""
	I0120 14:05:01.045007 1060619 logs.go:282] 2 containers: [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416]
	I0120 14:05:01.045068 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:01.050246 1060619 ssh_runner.go:195] Run: which crictl
	I0120 14:05:01.055164 1060619 logs.go:123] Gathering logs for etcd [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b] ...
	I0120 14:05:01.055200 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
	I0120 14:05:01.108108 1060619 logs.go:123] Gathering logs for coredns [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316] ...
	I0120 14:05:01.108153 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
	I0120 14:05:01.155209 1060619 logs.go:123] Gathering logs for kube-controller-manager [72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67] ...
	I0120 14:05:01.155242 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
	I0120 14:05:01.208141 1060619 logs.go:123] Gathering logs for container status ...
	I0120 14:05:01.208187 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:01.257057 1060619 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:01.257095 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:01.271460 1060619 logs.go:123] Gathering logs for kubernetes-dashboard [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a] ...
	I0120 14:05:01.271495 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
	I0120 14:05:01.315984 1060619 logs.go:123] Gathering logs for containerd ...
	I0120 14:05:01.316031 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:05:01.375729 1060619 logs.go:123] Gathering logs for kube-controller-manager [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787] ...
	I0120 14:05:01.375778 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
	I0120 14:04:58.591226 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:04:58.591819 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:04:58.591904 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:58.591790 1063195 retry.go:31] will retry after 3.366177049s: waiting for domain to come up
	I0120 14:05:01.962611 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:01.963313 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
	I0120 14:05:01.963381 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:05:01.963271 1063195 retry.go:31] will retry after 4.39777174s: waiting for domain to come up
	I0120 14:04:59.926968 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:02.425700 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:01.442435 1060619 logs.go:123] Gathering logs for storage-provisioner [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc] ...
	I0120 14:05:01.442489 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
	I0120 14:05:01.498316 1060619 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:01.498358 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:01.576794 1060619 logs.go:123] Gathering logs for kube-apiserver [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31] ...
	I0120 14:05:01.576853 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
	I0120 14:05:01.628660 1060619 logs.go:123] Gathering logs for kube-apiserver [02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad] ...
	I0120 14:05:01.628701 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
	I0120 14:05:01.676023 1060619 logs.go:123] Gathering logs for etcd [55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512] ...
	I0120 14:05:01.676066 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
	I0120 14:05:01.760456 1060619 logs.go:123] Gathering logs for kube-proxy [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332] ...
	I0120 14:05:01.760505 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
	I0120 14:05:01.808639 1060619 logs.go:123] Gathering logs for storage-provisioner [1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416] ...
	I0120 14:05:01.808679 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
	I0120 14:05:01.851560 1060619 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:01.851608 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:05:01.974027 1060619 logs.go:123] Gathering logs for coredns [41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246] ...
	I0120 14:05:01.974068 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
	I0120 14:05:02.028243 1060619 logs.go:123] Gathering logs for kube-scheduler [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807] ...
	I0120 14:05:02.028282 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
	I0120 14:05:02.072145 1060619 logs.go:123] Gathering logs for kube-scheduler [b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a] ...
	I0120 14:05:02.072184 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
	I0120 14:05:02.132398 1060619 logs.go:123] Gathering logs for kube-proxy [690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527] ...
	I0120 14:05:02.132439 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
	I0120 14:05:04.688443 1060619 system_pods.go:59] 8 kube-system pods found
	I0120 14:05:04.688485 1060619 system_pods.go:61] "coredns-668d6bf9bc-n6s85" [69154ea8-b8a0-4320-827b-616277a36df3] Running
	I0120 14:05:04.688490 1060619 system_pods.go:61] "etcd-no-preload-097312" [1f5692ac-d9be-42f7-bfbb-2bbf06b63811] Running
	I0120 14:05:04.688493 1060619 system_pods.go:61] "kube-apiserver-no-preload-097312" [6794a44a-ccbb-4242-819e-27b02589ca1a] Running
	I0120 14:05:04.688497 1060619 system_pods.go:61] "kube-controller-manager-no-preload-097312" [272771b0-de01-49a8-902c-fffa5e478bdf] Running
	I0120 14:05:04.688500 1060619 system_pods.go:61] "kube-proxy-xnklt" [5a439af8-d69e-40b5-aa33-b04adf773d1f] Running
	I0120 14:05:04.688503 1060619 system_pods.go:61] "kube-scheduler-no-preload-097312" [10717848-0d1d-4f1d-9c31-07956ac756db] Running
	I0120 14:05:04.688510 1060619 system_pods.go:61] "metrics-server-f79f97bbb-4wzdk" [f224006c-6882-455d-b3e6-45c1a34c5748] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:04.688514 1060619 system_pods.go:61] "storage-provisioner" [a862a893-ccaf-45fb-a349-98804054f044] Running
	I0120 14:05:04.688522 1060619 system_pods.go:74] duration metric: took 4.130964895s to wait for pod list to return data ...
	I0120 14:05:04.688529 1060619 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:04.691965 1060619 default_sa.go:45] found service account: "default"
	I0120 14:05:04.691998 1060619 default_sa.go:55] duration metric: took 3.462513ms for default service account to be created ...
	I0120 14:05:04.692009 1060619 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:04.697430 1060619 system_pods.go:87] 8 kube-system pods found
	I0120 14:05:04.700108 1060619 system_pods.go:105] "coredns-668d6bf9bc-n6s85" [69154ea8-b8a0-4320-827b-616277a36df3] Running
	I0120 14:05:04.700127 1060619 system_pods.go:105] "etcd-no-preload-097312" [1f5692ac-d9be-42f7-bfbb-2bbf06b63811] Running
	I0120 14:05:04.700134 1060619 system_pods.go:105] "kube-apiserver-no-preload-097312" [6794a44a-ccbb-4242-819e-27b02589ca1a] Running
	I0120 14:05:04.700139 1060619 system_pods.go:105] "kube-controller-manager-no-preload-097312" [272771b0-de01-49a8-902c-fffa5e478bdf] Running
	I0120 14:05:04.700143 1060619 system_pods.go:105] "kube-proxy-xnklt" [5a439af8-d69e-40b5-aa33-b04adf773d1f] Running
	I0120 14:05:04.700148 1060619 system_pods.go:105] "kube-scheduler-no-preload-097312" [10717848-0d1d-4f1d-9c31-07956ac756db] Running
	I0120 14:05:04.700155 1060619 system_pods.go:105] "metrics-server-f79f97bbb-4wzdk" [f224006c-6882-455d-b3e6-45c1a34c5748] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:04.700159 1060619 system_pods.go:105] "storage-provisioner" [a862a893-ccaf-45fb-a349-98804054f044] Running
	I0120 14:05:04.700169 1060619 system_pods.go:147] duration metric: took 8.153945ms to wait for k8s-apps to be running ...
	I0120 14:05:04.700179 1060619 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 14:05:04.700240 1060619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:05:04.716658 1060619 system_svc.go:56] duration metric: took 16.464364ms WaitForService to wait for kubelet
	I0120 14:05:04.716694 1060619 kubeadm.go:582] duration metric: took 4m27.518718562s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:05:04.716715 1060619 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:05:04.720144 1060619 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:05:04.720183 1060619 node_conditions.go:123] node cpu capacity is 2
	I0120 14:05:04.720205 1060619 node_conditions.go:105] duration metric: took 3.486041ms to run NodePressure ...
	I0120 14:05:04.720220 1060619 start.go:241] waiting for startup goroutines ...
	I0120 14:05:04.720227 1060619 start.go:246] waiting for cluster config update ...
	I0120 14:05:04.720238 1060619 start.go:255] writing updated cluster config ...
	I0120 14:05:04.720581 1060619 ssh_runner.go:195] Run: rm -f paused
	I0120 14:05:04.773678 1060619 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 14:05:04.775933 1060619 out.go:177] * Done! kubectl is now configured to use "no-preload-097312" cluster and "default" namespace by default
	I0120 14:05:00.367543 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:02.867886 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:04.870609 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
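[Editor's note] The pod_ready.go lines above are a polling loop: the test repeatedly asks whether the metrics-server pod's Ready condition has turned True and logs "Ready":"False" on every miss. A hypothetical sketch of such a loop, shelling out to kubectl instead of using minikube's own client, looks like this.

// Hypothetical readiness-polling sketch; pod name and namespace taken from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks the cluster whether the pod's Ready condition is "True".
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

// waitForPod polls until the pod is Ready or the timeout expires.
func waitForPod(namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ok, err := podReady(namespace, name); err == nil && ok {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
}

func main() {
	fmt.Println(waitForPod("kube-system", "metrics-server-f79f97bbb-5mwxz", 4*time.Minute))
}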
	I0120 14:05:06.365969 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.366715 1063160 main.go:141] libmachine: (newest-cni-488874) found domain IP: 192.168.50.166
	I0120 14:05:06.366743 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has current primary IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.366751 1063160 main.go:141] libmachine: (newest-cni-488874) reserving static IP address...
	I0120 14:05:06.367368 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "newest-cni-488874", mac: "52:54:00:01:cb:b8", ip: "192.168.50.166"} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:06.367396 1063160 main.go:141] libmachine: (newest-cni-488874) reserved static IP address 192.168.50.166 for domain newest-cni-488874
	I0120 14:05:06.367422 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | skip adding static IP to network mk-newest-cni-488874 - found existing host DHCP lease matching {name: "newest-cni-488874", mac: "52:54:00:01:cb:b8", ip: "192.168.50.166"}
	I0120 14:05:06.367441 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Getting to WaitForSSH function...
	I0120 14:05:06.367475 1063160 main.go:141] libmachine: (newest-cni-488874) waiting for SSH...
	I0120 14:05:06.369915 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.370396 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:06.370436 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.370661 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Using SSH client type: external
	I0120 14:05:06.370702 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa (-rw-------)
	I0120 14:05:06.370734 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:05:06.370751 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | About to run SSH command:
	I0120 14:05:06.370765 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | exit 0
	I0120 14:05:06.497942 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | SSH cmd err, output: <nil>: 
	I0120 14:05:06.498433 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetConfigRaw
	I0120 14:05:06.499140 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetIP
	I0120 14:05:06.502365 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.502778 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:06.502860 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.503147 1063160 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/config.json ...
	I0120 14:05:06.503544 1063160 machine.go:93] provisionDockerMachine start ...
	I0120 14:05:06.503577 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:06.503843 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:06.506590 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.507108 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:06.507143 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.507356 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:06.507593 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:06.507757 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:06.507886 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:06.508072 1063160 main.go:141] libmachine: Using SSH client type: native
	I0120 14:05:06.508364 1063160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.166 22 <nil> <nil>}
	I0120 14:05:06.508383 1063160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:05:06.617955 1063160 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:05:06.617985 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetMachineName
	I0120 14:05:06.618222 1063160 buildroot.go:166] provisioning hostname "newest-cni-488874"
	I0120 14:05:06.618235 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetMachineName
	I0120 14:05:06.618406 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:06.621376 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.621821 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:06.621848 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.622132 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:06.622353 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:06.622542 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:06.622802 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:06.623048 1063160 main.go:141] libmachine: Using SSH client type: native
	I0120 14:05:06.623283 1063160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.166 22 <nil> <nil>}
	I0120 14:05:06.623305 1063160 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-488874 && echo "newest-cni-488874" | sudo tee /etc/hostname
	I0120 14:05:06.743983 1063160 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-488874
	
	I0120 14:05:06.744012 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:06.747395 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.747789 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:06.747822 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.748024 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:06.748243 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:06.748471 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:06.748646 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:06.748824 1063160 main.go:141] libmachine: Using SSH client type: native
	I0120 14:05:06.749137 1063160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.166 22 <nil> <nil>}
	I0120 14:05:06.749160 1063160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-488874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-488874/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-488874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:05:06.864413 1063160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
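[Editor's note] Provisioning runs each of these steps as a one-off SSH command using the machine's generated private key: set the hostname, write /etc/hostname, then patch /etc/hosts. Below is an assumed, self-contained sketch of that pattern with golang.org/x/crypto/ssh; it is not libmachine's implementation, and the address, user, key path and command come straight from the log.

// Assumed sketch of running one provisioning command over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote dials the guest with key auth and runs a single shell command.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.50.166:22", "docker",
		"/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa",
		`sudo hostname newest-cni-488874 && echo "newest-cni-488874" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}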
	I0120 14:05:06.864448 1063160 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-998973/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-998973/.minikube}
	I0120 14:05:06.864468 1063160 buildroot.go:174] setting up certificates
	I0120 14:05:06.864479 1063160 provision.go:84] configureAuth start
	I0120 14:05:06.864489 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetMachineName
	I0120 14:05:06.864804 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetIP
	I0120 14:05:06.867729 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.868082 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:06.868115 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.868340 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:06.870939 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.871411 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:06.871441 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.871576 1063160 provision.go:143] copyHostCerts
	I0120 14:05:06.871647 1063160 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem, removing ...
	I0120 14:05:06.871668 1063160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem
	I0120 14:05:06.871737 1063160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem (1082 bytes)
	I0120 14:05:06.871841 1063160 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem, removing ...
	I0120 14:05:06.871850 1063160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem
	I0120 14:05:06.871886 1063160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem (1123 bytes)
	I0120 14:05:06.871962 1063160 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem, removing ...
	I0120 14:05:06.871969 1063160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem
	I0120 14:05:06.871996 1063160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem (1675 bytes)
	I0120 14:05:06.872059 1063160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem org=jenkins.newest-cni-488874 san=[127.0.0.1 192.168.50.166 localhost minikube newest-cni-488874]
	I0120 14:05:06.934937 1063160 provision.go:177] copyRemoteCerts
	I0120 14:05:06.934999 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:05:06.935043 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:06.938241 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.938542 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:06.938570 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:06.938812 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:06.938991 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:06.939188 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:06.939330 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
	I0120 14:05:07.032002 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 14:05:04.925140 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:06.925415 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:08.925905 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:07.061467 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:05:07.089322 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 14:05:07.116452 1063160 provision.go:87] duration metric: took 251.958223ms to configureAuth
	I0120 14:05:07.116486 1063160 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:05:07.116712 1063160 config.go:182] Loaded profile config "newest-cni-488874": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:05:07.116729 1063160 machine.go:96] duration metric: took 613.164362ms to provisionDockerMachine
	I0120 14:05:07.116742 1063160 start.go:293] postStartSetup for "newest-cni-488874" (driver="kvm2")
	I0120 14:05:07.116756 1063160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:05:07.116795 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:07.117251 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:05:07.117292 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:07.120232 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.120713 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:07.120748 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.120914 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:07.121122 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:07.121323 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:07.121518 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
	I0120 14:05:07.203944 1063160 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:05:07.208749 1063160 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:05:07.208779 1063160 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-998973/.minikube/addons for local assets ...
	I0120 14:05:07.208840 1063160 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-998973/.minikube/files for local assets ...
	I0120 14:05:07.208922 1063160 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem -> 10062632.pem in /etc/ssl/certs
	I0120 14:05:07.209070 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:05:07.219151 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem --> /etc/ssl/certs/10062632.pem (1708 bytes)
	I0120 14:05:07.247592 1063160 start.go:296] duration metric: took 130.829742ms for postStartSetup
	I0120 14:05:07.247660 1063160 fix.go:56] duration metric: took 20.079818838s for fixHost
	I0120 14:05:07.247693 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:07.250441 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.250887 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:07.250933 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.251219 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:07.251458 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:07.251656 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:07.251876 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:07.252078 1063160 main.go:141] libmachine: Using SSH client type: native
	I0120 14:05:07.252282 1063160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.166 22 <nil> <nil>}
	I0120 14:05:07.252292 1063160 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:05:07.358734 1063160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381907.331379316
	
	I0120 14:05:07.358760 1063160 fix.go:216] guest clock: 1737381907.331379316
	I0120 14:05:07.358768 1063160 fix.go:229] Guest: 2025-01-20 14:05:07.331379316 +0000 UTC Remote: 2025-01-20 14:05:07.247665057 +0000 UTC m=+20.241792947 (delta=83.714259ms)
	I0120 14:05:07.358792 1063160 fix.go:200] guest clock delta is within tolerance: 83.714259ms
	I0120 14:05:07.358800 1063160 start.go:83] releasing machines lock for "newest-cni-488874", held for 20.190993038s
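[Editor's note] The fix.go lines compare the guest's clock (read over SSH with "date +%s.%N") against the host's timestamp for the same moment and only proceed when the skew is within tolerance. A minimal sketch of that comparison, with a one-second tolerance assumed purely for illustration and the example values taken from the log:

// Minimal sketch (assumed) of the guest-clock skew check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockWithinTolerance parses the guest's epoch string and compares it with the host time.
func clockWithinTolerance(guestEpoch string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	return delta, math.Abs(float64(delta)) <= float64(tol), nil
}

func main() {
	delta, ok, err := clockWithinTolerance("1737381907.331379316",
		time.Date(2025, 1, 20, 14, 5, 7, 247665057, time.UTC), time.Second)
	fmt.Println(delta, ok, err)
}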
	I0120 14:05:07.358825 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:07.359172 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetIP
	I0120 14:05:07.361973 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.362383 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:07.362417 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.362637 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:07.363168 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:07.363391 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:07.363523 1063160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:05:07.363572 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:07.363632 1063160 ssh_runner.go:195] Run: cat /version.json
	I0120 14:05:07.363664 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:07.367042 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.367317 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.367442 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:07.367578 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.367611 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:07.367813 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:07.367922 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:07.367948 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:07.367966 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:07.368128 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:07.368161 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
	I0120 14:05:07.368279 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:07.368454 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:07.368654 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
	I0120 14:05:07.478493 1063160 ssh_runner.go:195] Run: systemctl --version
	I0120 14:05:07.485765 1063160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:05:07.494763 1063160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:05:07.494869 1063160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:05:07.517499 1063160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:05:07.517538 1063160 start.go:495] detecting cgroup driver to use...
	I0120 14:05:07.517617 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 14:05:07.549661 1063160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 14:05:07.566559 1063160 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:05:07.566632 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:05:07.582210 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:05:07.597548 1063160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:05:07.716948 1063160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:05:07.905168 1063160 docker.go:233] disabling docker service ...
	I0120 14:05:07.905273 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:05:07.921341 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:05:07.939537 1063160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:05:08.082338 1063160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:05:08.215419 1063160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:05:08.231001 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:05:08.252949 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0120 14:05:08.264709 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 14:05:08.276797 1063160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 14:05:08.276871 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 14:05:08.290184 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 14:05:08.302267 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 14:05:08.314508 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 14:05:08.326383 1063160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:05:08.340055 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 14:05:08.351978 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0120 14:05:08.365499 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0120 14:05:08.378256 1063160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:05:08.388926 1063160 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:05:08.389066 1063160 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:05:08.404028 1063160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:05:08.414646 1063160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:05:08.552547 1063160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 14:05:08.586170 1063160 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 14:05:08.586254 1063160 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 14:05:08.591476 1063160 retry.go:31] will retry after 1.288149502s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0120 14:05:09.881095 1063160 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 14:05:09.887272 1063160 start.go:563] Will wait 60s for crictl version
	I0120 14:05:09.887354 1063160 ssh_runner.go:195] Run: which crictl
	I0120 14:05:09.892059 1063160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:05:09.937510 1063160 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0120 14:05:09.937590 1063160 ssh_runner.go:195] Run: containerd --version
	I0120 14:05:09.970847 1063160 ssh_runner.go:195] Run: containerd --version
	I0120 14:05:10.000771 1063160 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
	I0120 14:05:10.002363 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetIP
	I0120 14:05:10.005275 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:10.005716 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:10.005747 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:10.006008 1063160 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 14:05:10.011138 1063160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:05:10.027519 1063160 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0120 14:05:07.369190 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:09.867683 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:10.029161 1063160 kubeadm.go:883] updating cluster {Name:newest-cni-488874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:05:10.029378 1063160 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 14:05:10.029484 1063160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:05:10.069810 1063160 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 14:05:10.069842 1063160 containerd.go:534] Images already preloaded, skipping extraction
	I0120 14:05:10.069913 1063160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:05:10.108630 1063160 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 14:05:10.108657 1063160 cache_images.go:84] Images are preloaded, skipping loading
	I0120 14:05:10.108667 1063160 kubeadm.go:934] updating node { 192.168.50.166 8443 v1.32.0 containerd true true} ...
	I0120 14:05:10.108787 1063160 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-488874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:05:10.108847 1063160 ssh_runner.go:195] Run: sudo crictl info
	I0120 14:05:10.145581 1063160 cni.go:84] Creating CNI manager for ""
	I0120 14:05:10.145612 1063160 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 14:05:10.145629 1063160 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0120 14:05:10.145661 1063160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.166 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-488874 NodeName:newest-cni-488874 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 14:05:10.145821 1063160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-488874"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.166"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.166"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:05:10.145921 1063160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 14:05:10.158654 1063160 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:05:10.158759 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:05:10.169232 1063160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0120 14:05:10.188001 1063160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:05:10.208552 1063160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2311 bytes)
	I0120 14:05:10.228712 1063160 ssh_runner.go:195] Run: grep 192.168.50.166	control-plane.minikube.internal$ /etc/hosts
	I0120 14:05:10.233325 1063160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.166	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:05:10.247712 1063160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:05:10.372513 1063160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:05:10.395357 1063160 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874 for IP: 192.168.50.166
	I0120 14:05:10.395381 1063160 certs.go:194] generating shared ca certs ...
	I0120 14:05:10.395397 1063160 certs.go:226] acquiring lock for ca certs: {Name:mk3b53704e4ec52de26582ed9269b5c3b0eb7914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:10.395563 1063160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-998973/.minikube/ca.key
	I0120 14:05:10.395622 1063160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.key
	I0120 14:05:10.395634 1063160 certs.go:256] generating profile certs ...
	I0120 14:05:10.395725 1063160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/client.key
	I0120 14:05:10.395793 1063160 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/apiserver.key.2d5efe46
	I0120 14:05:10.395840 1063160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/proxy-client.key
	I0120 14:05:10.396009 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263.pem (1338 bytes)
	W0120 14:05:10.396059 1063160 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263_empty.pem, impossibly tiny 0 bytes
	I0120 14:05:10.396065 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:05:10.396168 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:05:10.396209 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:05:10.396263 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem (1675 bytes)
	I0120 14:05:10.396327 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem (1708 bytes)
	I0120 14:05:10.397217 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:05:10.438100 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 14:05:10.470318 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:05:10.503429 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:05:10.548514 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0120 14:05:10.591209 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 14:05:10.620013 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:05:10.654243 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 14:05:10.682296 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem --> /usr/share/ca-certificates/10062632.pem (1708 bytes)
	I0120 14:05:10.711242 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:05:10.740118 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263.pem --> /usr/share/ca-certificates/1006263.pem (1338 bytes)
	I0120 14:05:10.769557 1063160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:05:10.790416 1063160 ssh_runner.go:195] Run: openssl version
	I0120 14:05:10.798858 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1006263.pem && ln -fs /usr/share/ca-certificates/1006263.pem /etc/ssl/certs/1006263.pem"
	I0120 14:05:10.812120 1063160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1006263.pem
	I0120 14:05:10.818021 1063160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:56 /usr/share/ca-certificates/1006263.pem
	I0120 14:05:10.818106 1063160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1006263.pem
	I0120 14:05:10.825236 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1006263.pem /etc/ssl/certs/51391683.0"
	I0120 14:05:10.837376 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10062632.pem && ln -fs /usr/share/ca-certificates/10062632.pem /etc/ssl/certs/10062632.pem"
	I0120 14:05:10.851234 1063160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10062632.pem
	I0120 14:05:10.856673 1063160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:56 /usr/share/ca-certificates/10062632.pem
	I0120 14:05:10.856762 1063160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10062632.pem
	I0120 14:05:10.863757 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10062632.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:05:10.876948 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:05:10.889955 1063160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:05:10.895521 1063160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:48 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:05:10.895628 1063160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:05:10.902527 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:05:10.915727 1063160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:05:10.921530 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:05:10.928703 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:05:10.936028 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:05:10.943185 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:05:10.950536 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:05:10.957927 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 14:05:10.965037 1063160 kubeadm.go:392] StartCluster: {Name:newest-cni-488874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:05:10.965163 1063160 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 14:05:10.965237 1063160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:05:11.016895 1063160 cri.go:89] found id: "95be8b17de43b5aa6b68a36c754c80e8d62bd647fb91f0e7c71481244235e460"
	I0120 14:05:11.016941 1063160 cri.go:89] found id: "8b8e79e267ec6846094bf129f754c316fa64f873658a465c224ea763b924d6ca"
	I0120 14:05:11.016950 1063160 cri.go:89] found id: "d19a5d3151a0febe9ecb65ccce412de85fc9ece5c227409fb231a5b88bede145"
	I0120 14:05:11.016984 1063160 cri.go:89] found id: "6d63815cd6e98958f31f54f7664461dd92453e37f879e555306cd84dd0d6cc74"
	I0120 14:05:11.017004 1063160 cri.go:89] found id: "f804fb6624506cef920ec1119d3d52222216ca492ff6742b1c7bd2a306f3f3c9"
	I0120 14:05:11.017015 1063160 cri.go:89] found id: "051033fb791c4ccaa103881d621bb050a451c1619be1f52d15846b09499aa578"
	I0120 14:05:11.017023 1063160 cri.go:89] found id: "00fb987b20977c8655fb2a021c80ea212ed1570a7f6b5708ffd22f47540bce20"
	I0120 14:05:11.017028 1063160 cri.go:89] found id: "6310c66356a530a02ba34dd663abb471277506886eb4b0f05251994e9f8955fb"
	I0120 14:05:11.017032 1063160 cri.go:89] found id: ""
	I0120 14:05:11.017100 1063160 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0120 14:05:11.034091 1063160 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T14:05:11Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0120 14:05:11.034236 1063160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:05:11.046808 1063160 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:05:11.046831 1063160 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:05:11.046883 1063160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:05:11.059273 1063160 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:05:11.060135 1063160 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-488874" does not appear in /home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 14:05:11.060560 1063160 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-998973/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-488874" cluster setting kubeconfig missing "newest-cni-488874" context setting]
	I0120 14:05:11.061427 1063160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:11.063136 1063160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:05:11.076655 1063160 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.166
	I0120 14:05:11.076709 1063160 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:05:11.076734 1063160 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0120 14:05:11.076801 1063160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:05:11.128107 1063160 cri.go:89] found id: "95be8b17de43b5aa6b68a36c754c80e8d62bd647fb91f0e7c71481244235e460"
	I0120 14:05:11.128141 1063160 cri.go:89] found id: "8b8e79e267ec6846094bf129f754c316fa64f873658a465c224ea763b924d6ca"
	I0120 14:05:11.128147 1063160 cri.go:89] found id: "d19a5d3151a0febe9ecb65ccce412de85fc9ece5c227409fb231a5b88bede145"
	I0120 14:05:11.128153 1063160 cri.go:89] found id: "6d63815cd6e98958f31f54f7664461dd92453e37f879e555306cd84dd0d6cc74"
	I0120 14:05:11.128157 1063160 cri.go:89] found id: "f804fb6624506cef920ec1119d3d52222216ca492ff6742b1c7bd2a306f3f3c9"
	I0120 14:05:11.128163 1063160 cri.go:89] found id: "051033fb791c4ccaa103881d621bb050a451c1619be1f52d15846b09499aa578"
	I0120 14:05:11.128167 1063160 cri.go:89] found id: "00fb987b20977c8655fb2a021c80ea212ed1570a7f6b5708ffd22f47540bce20"
	I0120 14:05:11.128171 1063160 cri.go:89] found id: "6310c66356a530a02ba34dd663abb471277506886eb4b0f05251994e9f8955fb"
	I0120 14:05:11.128175 1063160 cri.go:89] found id: ""
	I0120 14:05:11.128183 1063160 cri.go:252] Stopping containers: [95be8b17de43b5aa6b68a36c754c80e8d62bd647fb91f0e7c71481244235e460 8b8e79e267ec6846094bf129f754c316fa64f873658a465c224ea763b924d6ca d19a5d3151a0febe9ecb65ccce412de85fc9ece5c227409fb231a5b88bede145 6d63815cd6e98958f31f54f7664461dd92453e37f879e555306cd84dd0d6cc74 f804fb6624506cef920ec1119d3d52222216ca492ff6742b1c7bd2a306f3f3c9 051033fb791c4ccaa103881d621bb050a451c1619be1f52d15846b09499aa578 00fb987b20977c8655fb2a021c80ea212ed1570a7f6b5708ffd22f47540bce20 6310c66356a530a02ba34dd663abb471277506886eb4b0f05251994e9f8955fb]
	I0120 14:05:11.128278 1063160 ssh_runner.go:195] Run: which crictl
	I0120 14:05:11.132849 1063160 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 95be8b17de43b5aa6b68a36c754c80e8d62bd647fb91f0e7c71481244235e460 8b8e79e267ec6846094bf129f754c316fa64f873658a465c224ea763b924d6ca d19a5d3151a0febe9ecb65ccce412de85fc9ece5c227409fb231a5b88bede145 6d63815cd6e98958f31f54f7664461dd92453e37f879e555306cd84dd0d6cc74 f804fb6624506cef920ec1119d3d52222216ca492ff6742b1c7bd2a306f3f3c9 051033fb791c4ccaa103881d621bb050a451c1619be1f52d15846b09499aa578 00fb987b20977c8655fb2a021c80ea212ed1570a7f6b5708ffd22f47540bce20 6310c66356a530a02ba34dd663abb471277506886eb4b0f05251994e9f8955fb
	I0120 14:05:11.182117 1063160 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:05:11.202340 1063160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:05:11.216641 1063160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:05:11.216665 1063160 kubeadm.go:157] found existing configuration files:
	
	I0120 14:05:11.216712 1063160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:05:11.227893 1063160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:05:11.227979 1063160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:05:11.239065 1063160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:05:11.250423 1063160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:05:11.250491 1063160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:05:11.261814 1063160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:05:11.272846 1063160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:05:11.272913 1063160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:05:11.284218 1063160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:05:11.294670 1063160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:05:11.294762 1063160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:05:11.306384 1063160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:05:11.318728 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:05:11.491305 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:05:10.918089 1060798 pod_ready.go:82] duration metric: took 4m0.000161453s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" ...
	E0120 14:05:10.918131 1060798 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:05:10.918160 1060798 pod_ready.go:39] duration metric: took 4m13.053682746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:10.918201 1060798 kubeadm.go:597] duration metric: took 4m21.286948978s to restartPrimaryControlPlane
	W0120 14:05:10.918306 1060798 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:05:10.918352 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0120 14:05:12.920615 1060798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.002231911s)
	I0120 14:05:12.920701 1060798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:05:12.942116 1060798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:05:12.954775 1060798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:05:12.966775 1060798 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:05:12.966807 1060798 kubeadm.go:157] found existing configuration files:
	
	I0120 14:05:12.966883 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:05:12.977602 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:05:12.977684 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:05:12.989019 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:05:13.000820 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:05:13.000898 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:05:13.016644 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:05:13.031439 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:05:13.031528 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:05:13.042457 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:05:13.055593 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:05:13.055669 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:05:13.068674 1060798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:05:13.130131 1060798 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:05:13.130201 1060798 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:05:13.252056 1060798 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:05:13.252208 1060798 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:05:13.252350 1060798 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:05:13.262351 1060798 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:05:13.264231 1060798 out.go:235]   - Generating certificates and keys ...
	I0120 14:05:13.264325 1060798 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:05:13.264382 1060798 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:05:13.264450 1060798 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:05:13.264503 1060798 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:05:13.264566 1060798 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:05:13.264617 1060798 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:05:13.264693 1060798 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:05:13.264816 1060798 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:05:13.264980 1060798 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:05:13.265097 1060798 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:05:13.265160 1060798 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:05:13.265250 1060798 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:05:13.376018 1060798 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:05:13.789822 1060798 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:05:13.884391 1060798 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:05:14.207456 1060798 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:05:14.442708 1060798 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:05:14.443884 1060798 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:05:14.447802 1060798 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:05:11.868693 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:13.869685 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:12.532029 1063160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.040673038s)
	I0120 14:05:12.532063 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:05:12.818119 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:05:12.907512 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:05:12.995770 1063160 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:12.995910 1063160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:13.496795 1063160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:13.996059 1063160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:14.022569 1063160 api_server.go:72] duration metric: took 1.026799902s to wait for apiserver process to appear ...
	I0120 14:05:14.022606 1063160 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:14.022633 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
	I0120 14:05:14.023253 1063160 api_server.go:269] stopped: https://192.168.50.166:8443/healthz: Get "https://192.168.50.166:8443/healthz": dial tcp 192.168.50.166:8443: connect: connection refused
	I0120 14:05:14.523764 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
	I0120 14:05:14.449454 1060798 out.go:235]   - Booting up control plane ...
	I0120 14:05:14.449591 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:05:14.449723 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:05:14.450498 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:05:14.474336 1060798 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:05:14.486142 1060798 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:05:14.486368 1060798 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:05:14.656630 1060798 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:05:14.656842 1060798 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:05:15.658053 1060798 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001688461s
	I0120 14:05:15.658185 1060798 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:05:18.095415 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:05:18.095452 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:05:18.095472 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
	I0120 14:05:18.117734 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:05:18.117775 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:05:18.523010 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
	I0120 14:05:18.531327 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:05:18.531374 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:05:19.023177 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
	I0120 14:05:19.033109 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:05:19.033139 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:05:19.522763 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
	I0120 14:05:19.546252 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:05:19.546291 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:05:20.022811 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
	I0120 14:05:20.029777 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 200:
	ok
	I0120 14:05:20.043595 1063160 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:20.043704 1063160 api_server.go:131] duration metric: took 6.021087892s to wait for apiserver health ...
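The repeated 500 responses above come from the apiserver's verbose /healthz endpoint, which the test polls until every post-start hook (here, rbac/bootstrap-roles) reports ok. As an illustrative sketch only, the same hook-by-hook breakdown can be fetched directly against the endpoint from the log (IP and port taken from the lines above; the TLS check is simply skipped, and depending on the cluster's anonymous-auth settings a client certificate may be required):

	curl -sk https://192.168.50.166:8443/healthz?verbose
	# each hook prints as [+]name ok or [-]name failed: reason withheld;
	# the overall status flips from 500 to 200 once all hooks report ok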
	I0120 14:05:20.043732 1063160 cni.go:84] Creating CNI manager for ""
	I0120 14:05:20.043753 1063160 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 14:05:20.046751 1063160 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:05:16.368848 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:18.372711 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:20.048206 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:05:20.067542 1063160 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
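The 496-byte file copied above is minikube's generated bridge CNI configuration; its actual contents are not reproduced in this log. It can be inspected on the node, and, purely as an illustration of the shape such a conflist usually takes (the plugin names, subnet, and cniVersion below are assumptions, not the file that was written):

	minikube -p newest-cni-488874 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}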
	I0120 14:05:20.116639 1063160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:20.153739 1063160 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:20.153793 1063160 system_pods.go:61] "coredns-668d6bf9bc-mpv44" [382315fb-8bd3-48a2-86ec-ae0f5f2f32a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:05:20.153806 1063160 system_pods.go:61] "coredns-668d6bf9bc-t8nnm" [92f31a93-c6cc-414f-9cd2-92e65e91dafd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:05:20.153818 1063160 system_pods.go:61] "etcd-newest-cni-488874" [71af6d87-d4e6-4cd3-85ee-88500ddac52f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 14:05:20.153835 1063160 system_pods.go:61] "kube-apiserver-newest-cni-488874" [36f48149-363f-4ed7-a528-d3f5dc384634] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 14:05:20.153857 1063160 system_pods.go:61] "kube-controller-manager-newest-cni-488874" [56662aa4-63e6-48d2-aaa3-99b69a9cbab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 14:05:20.153874 1063160 system_pods.go:61] "kube-proxy-cs8qw" [36baa82d-ba63-4777-894f-8c105690264d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 14:05:20.153894 1063160 system_pods.go:61] "kube-scheduler-newest-cni-488874" [1113f67a-580c-4b20-ad28-da730b5d6292] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 14:05:20.153914 1063160 system_pods.go:61] "metrics-server-f79f97bbb-kwwbp" [bf28109f-6958-41ec-b019-e0419f4a5093] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:20.153926 1063160 system_pods.go:61] "storage-provisioner" [e8e2b6ce-d4b0-49d9-9e7d-c771eff38584] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 14:05:20.153937 1063160 system_pods.go:74] duration metric: took 37.269372ms to wait for pod list to return data ...
	I0120 14:05:20.153955 1063160 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:05:20.165337 1063160 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:05:20.165386 1063160 node_conditions.go:123] node cpu capacity is 2
	I0120 14:05:20.165404 1063160 node_conditions.go:105] duration metric: took 11.443297ms to run NodePressure ...
	I0120 14:05:20.165431 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:05:20.606701 1063160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:05:20.630689 1063160 ops.go:34] apiserver oom_adj: -16
	I0120 14:05:20.630720 1063160 kubeadm.go:597] duration metric: took 9.583881876s to restartPrimaryControlPlane
	I0120 14:05:20.630735 1063160 kubeadm.go:394] duration metric: took 9.665718124s to StartCluster
	I0120 14:05:20.630770 1063160 settings.go:142] acquiring lock: {Name:mked7f2376b8a06c64dcfd911ab4b0d95ecdbe2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:20.630867 1063160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 14:05:20.632794 1063160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:20.633135 1063160 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 14:05:20.633353 1063160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:05:20.633478 1063160 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-488874"
	I0120 14:05:20.633502 1063160 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-488874"
	I0120 14:05:20.633573 1063160 addons.go:69] Setting dashboard=true in profile "newest-cni-488874"
	W0120 14:05:20.633590 1063160 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:05:20.633579 1063160 addons.go:69] Setting metrics-server=true in profile "newest-cni-488874"
	I0120 14:05:20.633598 1063160 addons.go:238] Setting addon dashboard=true in "newest-cni-488874"
	W0120 14:05:20.633606 1063160 addons.go:247] addon dashboard should already be in state true
	I0120 14:05:20.633607 1063160 addons.go:238] Setting addon metrics-server=true in "newest-cni-488874"
	W0120 14:05:20.633617 1063160 addons.go:247] addon metrics-server should already be in state true
	I0120 14:05:20.633629 1063160 host.go:66] Checking if "newest-cni-488874" exists ...
	I0120 14:05:20.633635 1063160 host.go:66] Checking if "newest-cni-488874" exists ...
	I0120 14:05:20.633545 1063160 addons.go:69] Setting default-storageclass=true in profile "newest-cni-488874"
	I0120 14:05:20.633763 1063160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-488874"
	I0120 14:05:20.634080 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:20.634122 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:20.634170 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:20.634233 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:20.633533 1063160 config.go:182] Loaded profile config "newest-cni-488874": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:05:20.634250 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:20.634302 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:20.633644 1063160 host.go:66] Checking if "newest-cni-488874" exists ...
	I0120 14:05:20.634680 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:20.634727 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:20.635689 1063160 out.go:177] * Verifying Kubernetes components...
	I0120 14:05:20.637584 1063160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:05:20.656161 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
	I0120 14:05:20.656828 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:20.657442 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:05:20.657461 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:20.657809 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:20.657871 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43287
	I0120 14:05:20.658038 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0120 14:05:20.658145 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I0120 14:05:20.658269 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:20.658336 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:20.658943 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:20.658989 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:20.659328 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:05:20.659345 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:20.659720 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:20.659880 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:20.660060 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:05:20.660093 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:20.660172 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
	I0120 14:05:20.660415 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:20.661044 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:20.661120 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:20.665263 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:05:20.665288 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:20.665954 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:20.666578 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:20.666620 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:20.683034 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33033
	I0120 14:05:20.683326 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0120 14:05:20.684199 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:20.684289 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:20.685038 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:05:20.685070 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:20.685247 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:05:20.685265 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:20.685542 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:20.685774 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
	I0120 14:05:20.685975 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:20.686146 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
	I0120 14:05:20.691280 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I0120 14:05:20.691740 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:20.692270 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:05:20.692293 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:20.692739 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:20.693015 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
	I0120 14:05:20.731352 1063160 addons.go:238] Setting addon default-storageclass=true in "newest-cni-488874"
	W0120 14:05:20.731384 1063160 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:05:20.731420 1063160 host.go:66] Checking if "newest-cni-488874" exists ...
	I0120 14:05:20.731819 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:20.731899 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:20.732143 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:20.732149 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:20.732234 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:20.734806 1063160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:05:20.734814 1063160 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:05:20.735922 1063160 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:05:20.736428 1063160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:20.736456 1063160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:05:20.736487 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:20.737437 1063160 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:05:21.661193 1060798 kubeadm.go:310] [api-check] The API server is healthy after 6.00301289s
	I0120 14:05:21.679639 1060798 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:05:21.697225 1060798 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:05:21.729640 1060798 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:05:21.730176 1060798 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-553677 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:05:21.743570 1060798 kubeadm.go:310] [bootstrap-token] Using token: qgu27t.iap2ani2n2k7zkjw
	I0120 14:05:20.738718 1063160 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:05:20.738745 1063160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:05:20.738782 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:20.739196 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:05:20.739219 1063160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:05:20.739249 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:20.741831 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:20.742632 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:20.742658 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:20.743356 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:20.743407 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:20.743639 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:20.743790 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:20.743820 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:20.744020 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
	I0120 14:05:20.744122 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:20.744163 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:20.744243 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:20.744334 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:20.744350 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:20.744654 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:20.744707 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:20.744862 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:20.744869 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:20.744998 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:20.745067 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
	I0120 14:05:20.749103 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
	I0120 14:05:20.774519 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0120 14:05:20.774980 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:20.775531 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:05:20.775558 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:20.775918 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:20.776511 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:20.776562 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:20.797766 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I0120 14:05:20.798308 1063160 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:20.798841 1063160 main.go:141] libmachine: Using API Version  1
	I0120 14:05:20.798869 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:20.799392 1063160 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:20.799597 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
	I0120 14:05:20.802504 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
	I0120 14:05:20.802837 1063160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:20.802856 1063160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:05:20.802878 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
	I0120 14:05:20.806526 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:20.807070 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
	I0120 14:05:20.807096 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
	I0120 14:05:20.807327 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
	I0120 14:05:20.807558 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
	I0120 14:05:20.807743 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
	I0120 14:05:20.807887 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
	I0120 14:05:20.920926 1063160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:05:20.942953 1063160 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:20.943092 1063160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:20.964946 1063160 api_server.go:72] duration metric: took 331.745037ms to wait for apiserver process to appear ...
	I0120 14:05:20.965007 1063160 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:20.965033 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
	I0120 14:05:20.974335 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 200:
	ok
	I0120 14:05:20.976530 1063160 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:20.976563 1063160 api_server.go:131] duration metric: took 11.547041ms to wait for apiserver health ...
	I0120 14:05:20.976576 1063160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:20.988080 1063160 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:20.988125 1063160 system_pods.go:61] "coredns-668d6bf9bc-mpv44" [382315fb-8bd3-48a2-86ec-ae0f5f2f32a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:05:20.988136 1063160 system_pods.go:61] "coredns-668d6bf9bc-t8nnm" [92f31a93-c6cc-414f-9cd2-92e65e91dafd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:05:20.988146 1063160 system_pods.go:61] "etcd-newest-cni-488874" [71af6d87-d4e6-4cd3-85ee-88500ddac52f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 14:05:20.988160 1063160 system_pods.go:61] "kube-apiserver-newest-cni-488874" [36f48149-363f-4ed7-a528-d3f5dc384634] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 14:05:20.988169 1063160 system_pods.go:61] "kube-controller-manager-newest-cni-488874" [56662aa4-63e6-48d2-aaa3-99b69a9cbab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 14:05:20.988179 1063160 system_pods.go:61] "kube-proxy-cs8qw" [36baa82d-ba63-4777-894f-8c105690264d] Running
	I0120 14:05:20.988189 1063160 system_pods.go:61] "kube-scheduler-newest-cni-488874" [1113f67a-580c-4b20-ad28-da730b5d6292] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 14:05:20.988208 1063160 system_pods.go:61] "metrics-server-f79f97bbb-kwwbp" [bf28109f-6958-41ec-b019-e0419f4a5093] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:20.988217 1063160 system_pods.go:61] "storage-provisioner" [e8e2b6ce-d4b0-49d9-9e7d-c771eff38584] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 14:05:20.988232 1063160 system_pods.go:74] duration metric: took 11.646417ms to wait for pod list to return data ...
	I0120 14:05:20.988247 1063160 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:20.992460 1063160 default_sa.go:45] found service account: "default"
	I0120 14:05:20.992499 1063160 default_sa.go:55] duration metric: took 4.243767ms for default service account to be created ...
	I0120 14:05:20.992516 1063160 kubeadm.go:582] duration metric: took 359.326348ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0120 14:05:20.992566 1063160 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:05:21.000430 1063160 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:05:21.000469 1063160 node_conditions.go:123] node cpu capacity is 2
	I0120 14:05:21.000485 1063160 node_conditions.go:105] duration metric: took 7.912327ms to run NodePressure ...
	I0120 14:05:21.000502 1063160 start.go:241] waiting for startup goroutines ...
	I0120 14:05:21.007595 1063160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:21.171225 1063160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:05:21.171261 1063160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:05:21.237055 1063160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:21.319699 1063160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:05:21.319729 1063160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:05:21.403010 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:05:21.403048 1063160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:05:21.420219 1063160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:21.420263 1063160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:05:21.542358 1063160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:21.581020 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:05:21.581058 1063160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:05:21.654677 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:05:21.654718 1063160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:05:21.830895 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:05:21.830928 1063160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:05:21.935679 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:05:21.935718 1063160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:05:21.745349 1060798 out.go:235]   - Configuring RBAC rules ...
	I0120 14:05:21.745503 1060798 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:05:21.754153 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:05:21.765952 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:05:21.771799 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:05:21.779054 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:05:21.785557 1060798 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:05:22.071797 1060798 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:05:22.539495 1060798 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:05:23.070019 1060798 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:05:23.071157 1060798 kubeadm.go:310] 
	I0120 14:05:23.071304 1060798 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:05:23.071330 1060798 kubeadm.go:310] 
	I0120 14:05:23.071427 1060798 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:05:23.071438 1060798 kubeadm.go:310] 
	I0120 14:05:23.071470 1060798 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:05:23.071548 1060798 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:05:23.071621 1060798 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:05:23.071631 1060798 kubeadm.go:310] 
	I0120 14:05:23.071735 1060798 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:05:23.071777 1060798 kubeadm.go:310] 
	I0120 14:05:23.071865 1060798 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:05:23.071878 1060798 kubeadm.go:310] 
	I0120 14:05:23.071948 1060798 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:05:23.072051 1060798 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:05:23.072144 1060798 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:05:23.072164 1060798 kubeadm.go:310] 
	I0120 14:05:23.072309 1060798 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:05:23.072412 1060798 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:05:23.072423 1060798 kubeadm.go:310] 
	I0120 14:05:23.072537 1060798 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qgu27t.iap2ani2n2k7zkjw \
	I0120 14:05:23.072690 1060798 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6117bbf309b9c45faa7e855ae242c4a905187b8a6090715b408f9a384f87e114 \
	I0120 14:05:23.072722 1060798 kubeadm.go:310] 	--control-plane 
	I0120 14:05:23.072736 1060798 kubeadm.go:310] 
	I0120 14:05:23.072848 1060798 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:05:23.072867 1060798 kubeadm.go:310] 
	I0120 14:05:23.072985 1060798 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qgu27t.iap2ani2n2k7zkjw \
	I0120 14:05:23.073167 1060798 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6117bbf309b9c45faa7e855ae242c4a905187b8a6090715b408f9a384f87e114 
	I0120 14:05:23.075375 1060798 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
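At this point kubeadm has printed its standard post-init instructions for embed-certs-553677. A quick way to confirm the new control plane is reachable, run on the node itself and following the admin.conf path from the output above (the kubectl binary path is the one minikube stages, taken from later lines of this log; this is a sketch, not part of the test):

	sudo KUBECONFIG=/etc/kubernetes/admin.conf \
	  /var/lib/minikube/binaries/v1.32.0/kubectl get nodes -o wide
	sudo KUBECONFIG=/etc/kubernetes/admin.conf \
	  /var/lib/minikube/binaries/v1.32.0/kubectl get pods -n kube-system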
	I0120 14:05:23.075417 1060798 cni.go:84] Creating CNI manager for ""
	I0120 14:05:23.075445 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 14:05:23.077601 1060798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:05:22.089375 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:05:22.089408 1063160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:05:22.106543 1063160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098904515s)
	I0120 14:05:22.106605 1063160 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:22.106616 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
	I0120 14:05:22.106956 1063160 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:22.106976 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:22.106987 1063160 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:22.106995 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
	I0120 14:05:22.107275 1063160 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:22.107300 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:22.115066 1063160 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:22.115096 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
	I0120 14:05:22.115528 1063160 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:22.115548 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:22.115574 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Closing plugin on server side
	I0120 14:05:22.180167 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:05:22.180241 1063160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:05:22.292751 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:05:22.292788 1063160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:05:22.338119 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:22.338160 1063160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:05:22.382828 1063160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:23.300334 1063160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.063234672s)
	I0120 14:05:23.300414 1063160 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:23.300431 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
	I0120 14:05:23.300841 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Closing plugin on server side
	I0120 14:05:23.302811 1063160 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:23.302833 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:23.302843 1063160 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:23.302852 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
	I0120 14:05:23.303171 1063160 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:23.303199 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:23.485044 1063160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.942633159s)
	I0120 14:05:23.485191 1063160 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:23.485213 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
	I0120 14:05:23.485695 1063160 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:23.485755 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:23.485721 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Closing plugin on server side
	I0120 14:05:23.485784 1063160 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:23.485883 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
	I0120 14:05:23.486182 1063160 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:23.486207 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:23.486222 1063160 addons.go:479] Verifying addon metrics-server=true in "newest-cni-488874"
	I0120 14:05:24.106931 1063160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.72402581s)
	I0120 14:05:24.107000 1063160 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:24.107019 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
	I0120 14:05:24.107417 1063160 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:24.107441 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:24.107460 1063160 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:24.107472 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
	I0120 14:05:24.107745 1063160 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:24.107766 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:24.109654 1063160 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-488874 addons enable metrics-server
	
	I0120 14:05:24.111210 1063160 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
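Once the start completes, the enabled addons can be confirmed from the host. The commands below are a sketch for this profile; the kubernetes-dashboard namespace and the metrics-server deployment name are the upstream defaults, assumed rather than read from this log:

	minikube -p newest-cni-488874 addons list
	kubectl get pods -n kubernetes-dashboard
	kubectl -n kube-system get deploy metrics-server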
	I0120 14:05:23.079121 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:05:23.091937 1060798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:05:23.116874 1060798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:05:23.116939 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:23.116978 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-553677 minikube.k8s.io/updated_at=2025_01_20T14_05_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=embed-certs-553677 minikube.k8s.io/primary=true
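The label command above stamps the embed-certs-553677 node with minikube's bookkeeping labels (version, commit, primary). If needed, they can be read back with the same staged kubectl and kubeconfig used in that command (a sketch, not part of the test run):

	sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node embed-certs-553677 --show-labels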
	I0120 14:05:23.148895 1060798 ops.go:34] apiserver oom_adj: -16
	I0120 14:05:23.378558 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:23.879347 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:24.112676 1063160 addons.go:514] duration metric: took 3.479328497s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:05:24.112745 1063160 start.go:246] waiting for cluster config update ...
	I0120 14:05:24.112766 1063160 start.go:255] writing updated cluster config ...
	I0120 14:05:24.113104 1063160 ssh_runner.go:195] Run: rm -f paused
	I0120 14:05:24.170991 1063160 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 14:05:24.173034 1063160 out.go:177] * Done! kubectl is now configured to use "newest-cni-488874" cluster and "default" namespace by default
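The final message reports that the kubeconfig at /home/jenkins/minikube-integration/20242-998973/kubeconfig now points at the new cluster. A minimal check from the same environment, assuming the context name matches the profile name as minikube sets by default, would be:

	kubectl config current-context   # expected: newest-cni-488874
	kubectl get nodes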
	I0120 14:05:20.868649 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:22.869758 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:24.870554 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:24.379349 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:24.879187 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:25.379285 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:25.879105 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:26.379133 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:26.478857 1060798 kubeadm.go:1113] duration metric: took 3.36197683s to wait for elevateKubeSystemPrivileges
	I0120 14:05:26.478907 1060798 kubeadm.go:394] duration metric: took 4m36.924060891s to StartCluster
	I0120 14:05:26.478935 1060798 settings.go:142] acquiring lock: {Name:mked7f2376b8a06c64dcfd911ab4b0d95ecdbe2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:26.479036 1060798 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 14:05:26.481214 1060798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:26.481626 1060798 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 14:05:26.481760 1060798 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:05:26.481876 1060798 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-553677"
	I0120 14:05:26.481896 1060798 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-553677"
	W0120 14:05:26.481905 1060798 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:05:26.481906 1060798 config.go:182] Loaded profile config "embed-certs-553677": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:05:26.481916 1060798 addons.go:69] Setting default-storageclass=true in profile "embed-certs-553677"
	I0120 14:05:26.481942 1060798 addons.go:69] Setting metrics-server=true in profile "embed-certs-553677"
	I0120 14:05:26.481958 1060798 addons.go:238] Setting addon metrics-server=true in "embed-certs-553677"
	W0120 14:05:26.481970 1060798 addons.go:247] addon metrics-server should already be in state true
	I0120 14:05:26.481989 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
	I0120 14:05:26.481957 1060798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-553677"
	I0120 14:05:26.481936 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
	I0120 14:05:26.482431 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.482468 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.481939 1060798 addons.go:69] Setting dashboard=true in profile "embed-certs-553677"
	I0120 14:05:26.482542 1060798 addons.go:238] Setting addon dashboard=true in "embed-certs-553677"
	W0120 14:05:26.482554 1060798 addons.go:247] addon dashboard should already be in state true
	I0120 14:05:26.482556 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.482578 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
	I0120 14:05:26.482592 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.482543 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.482710 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.482972 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.483025 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.483426 1060798 out.go:177] * Verifying Kubernetes components...
	I0120 14:05:26.485000 1060798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:05:26.503670 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35691
	I0120 14:05:26.503915 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I0120 14:05:26.503956 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
	I0120 14:05:26.504290 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.504434 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.505146 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.505154 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.505171 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.505175 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.505608 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.505613 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.505894 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.506345 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.506391 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.506479 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.506502 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.506645 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.506751 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.507010 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.507160 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0120 14:05:26.507428 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.507754 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.508311 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.508336 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.508797 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.509512 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.509563 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.512304 1060798 addons.go:238] Setting addon default-storageclass=true in "embed-certs-553677"
	W0120 14:05:26.512327 1060798 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:05:26.512357 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
	I0120 14:05:26.512623 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.512672 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.529326 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
	I0120 14:05:26.530030 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.530626 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.530648 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.530699 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
	I0120 14:05:26.530970 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
	I0120 14:05:26.531055 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.531380 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.531456 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.531589 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.531641 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.531661 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.532129 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.532156 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.532234 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.532425 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.532428 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0120 14:05:26.532828 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.532931 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.533311 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.535196 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.535230 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.535639 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.536245 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 14:05:26.536293 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:26.537777 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:05:26.538423 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:05:26.538544 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:05:26.540631 1060798 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:05:26.540639 1060798 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:05:26.540707 1060798 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:05:26.541975 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:05:26.541997 1060798 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:05:26.542019 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:05:26.542075 1060798 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:26.542094 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:05:26.542115 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:05:26.544926 1060798 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:05:26.546368 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:05:26.546392 1060798 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:05:26.546418 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:05:26.549578 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.549713 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.553664 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:05:26.553690 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.553947 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:05:26.554117 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:05:26.554221 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:05:26.554305 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:05:26.554626 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.554889 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:05:26.554914 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.555102 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:05:26.555168 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:05:26.555182 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.555284 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:05:26.555340 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:05:26.555596 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:05:26.555691 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:05:26.555715 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:05:26.555883 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:05:26.556015 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:05:26.560724 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
	I0120 14:05:26.561235 1060798 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:26.561723 1060798 main.go:141] libmachine: Using API Version  1
	I0120 14:05:26.561738 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:26.562059 1060798 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:26.562297 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
	I0120 14:05:26.564026 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
	I0120 14:05:26.564278 1060798 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:26.564290 1060798 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:05:26.564304 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
	I0120 14:05:26.567858 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.568393 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
	I0120 14:05:26.568433 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
	I0120 14:05:26.568556 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
	I0120 14:05:26.568742 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
	I0120 14:05:26.568910 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
	I0120 14:05:26.569124 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
	I0120 14:05:26.773077 1060798 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:05:26.800362 1060798 node_ready.go:35] waiting up to 6m0s for node "embed-certs-553677" to be "Ready" ...
	I0120 14:05:26.843740 1060798 node_ready.go:49] node "embed-certs-553677" has status "Ready":"True"
	I0120 14:05:26.843780 1060798 node_ready.go:38] duration metric: took 43.372924ms for node "embed-certs-553677" to be "Ready" ...
	I0120 14:05:26.843796 1060798 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:26.873119 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:05:26.873149 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:05:26.874981 1060798 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:26.906789 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:26.940145 1060798 pod_ready.go:93] pod "etcd-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:26.940190 1060798 pod_ready.go:82] duration metric: took 65.181123ms for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:26.940211 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:26.969325 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:05:26.969365 1060798 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:05:26.969405 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:26.989583 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:05:26.989615 1060798 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:05:27.153235 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:05:27.153271 1060798 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:05:27.177818 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:27.177844 1060798 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:05:27.342345 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:05:27.342379 1060798 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:05:27.474579 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:05:27.474615 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:05:27.480859 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:27.583861 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:05:27.583897 1060798 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:05:27.625368 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:27.625405 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:27.625755 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:27.625774 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:27.625784 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:27.625792 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:27.626090 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:27.626113 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:27.626136 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
	I0120 14:05:27.642156 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:27.642194 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:27.642522 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:27.642553 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:27.884652 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:05:27.884699 1060798 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:05:28.031119 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:05:28.031155 1060798 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:05:28.145159 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:05:28.145199 1060798 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:05:28.273725 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:28.273765 1060798 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:05:28.506539 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:28.887655 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.918209178s)
	I0120 14:05:28.887715 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:28.887730 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:28.888066 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:28.888078 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:28.888089 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:28.888098 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:28.889637 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:28.889660 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:28.889672 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
	I0120 14:05:28.971702 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:27.380463 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:29.867706 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:29.421863 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.940948518s)
	I0120 14:05:29.421940 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:29.421960 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:29.422340 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
	I0120 14:05:29.422359 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:29.422381 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:29.422399 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:29.422412 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:29.422673 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:29.422690 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:29.422702 1060798 addons.go:479] Verifying addon metrics-server=true in "embed-certs-553677"
	I0120 14:05:29.422725 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
	I0120 14:05:30.228977 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.722367434s)
	I0120 14:05:30.229039 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:30.229056 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:30.229398 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:30.229421 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:30.229431 1060798 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:30.229439 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
	I0120 14:05:30.229692 1060798 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:30.229713 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:30.231477 1060798 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-553677 addons enable metrics-server
	
	I0120 14:05:30.233108 1060798 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:05:30.234556 1060798 addons.go:514] duration metric: took 3.752807641s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
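At this point the restart has enabled all four addons the test asks for (default-storageclass, storage-provisioner, metrics-server, dashboard). The same state can be checked by hand against the profile; a minimal sketch, assuming the embed-certs-553677 profile still exists on the host and that minikube created a kubectl context of the same name:

		minikube -p embed-certs-553677 addons list                      # enabled/disabled state per addon
		kubectl --context embed-certs-553677 -n kube-system get pods    # the addon workloads land in kube-system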
	I0120 14:05:31.446192 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:33.453220 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:31.868796 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:34.366219 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:35.447702 1060798 pod_ready.go:93] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:35.447735 1060798 pod_ready.go:82] duration metric: took 8.507515045s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.447745 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.453130 1060798 pod_ready.go:93] pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:35.453158 1060798 pod_ready.go:82] duration metric: took 5.406746ms for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.453169 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p5rcq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.457466 1060798 pod_ready.go:93] pod "kube-proxy-p5rcq" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:35.457492 1060798 pod_ready.go:82] duration metric: took 4.316578ms for pod "kube-proxy-p5rcq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.457503 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.462012 1060798 pod_ready.go:93] pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:35.462036 1060798 pod_ready.go:82] duration metric: took 4.526901ms for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:35.462043 1060798 pod_ready.go:39] duration metric: took 8.61823381s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:35.462058 1060798 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:35.462111 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:35.477958 1060798 api_server.go:72] duration metric: took 8.996279799s to wait for apiserver process to appear ...
	I0120 14:05:35.477993 1060798 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:35.478019 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
	I0120 14:05:35.483505 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 200:
	ok
	I0120 14:05:35.484660 1060798 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:35.484690 1060798 api_server.go:131] duration metric: took 6.687782ms to wait for apiserver health ...
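The healthz probe above talks to the API server directly on the VM IP. A roughly equivalent manual check is sketched below; the unauthenticated curl variant assumes the default kubeadm RBAC that exposes /healthz to anonymous clients, otherwise the authenticated kubectl form is needed:

		curl -k https://192.168.72.136:8443/healthz                     # expect: ok
		kubectl --context embed-certs-553677 get --raw /healthz         # authenticated variant via the kubeconfig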
	I0120 14:05:35.484701 1060798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:35.490073 1060798 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:35.490118 1060798 system_pods.go:61] "coredns-668d6bf9bc-6dk7s" [1bba3148-0210-42ef-b08e-753e16365e33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:05:35.490129 1060798 system_pods.go:61] "coredns-668d6bf9bc-88phd" [dfc4947e-a505-4337-99d3-156d86f7646c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:05:35.490137 1060798 system_pods.go:61] "etcd-embed-certs-553677" [c915afbe-8665-4fbf-bcae-802c3ca214dd] Running
	I0120 14:05:35.490143 1060798 system_pods.go:61] "kube-apiserver-embed-certs-553677" [d04063fb-d723-4a72-9024-0b6ceba0f09d] Running
	I0120 14:05:35.490149 1060798 system_pods.go:61] "kube-controller-manager-embed-certs-553677" [c6de6703-1533-4391-a67e-f2c2208ebafe] Running
	I0120 14:05:35.490153 1060798 system_pods.go:61] "kube-proxy-p5rcq" [3a9ddae1-ef67-4dd0-9c18-77e796c37d2a] Running
	I0120 14:05:35.490157 1060798 system_pods.go:61] "kube-scheduler-embed-certs-553677" [10c63c3f-0748-4af6-94fb-a0ca644d4c61] Running
	I0120 14:05:35.490164 1060798 system_pods.go:61] "metrics-server-f79f97bbb-b92sv" [f9b310a6-0d19-4084-aeae-ebe0a395d042] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:35.490170 1060798 system_pods.go:61] "storage-provisioner" [a6c0070e-1e3c-48af-80e3-1c3ca9163bf8] Running
	I0120 14:05:35.490179 1060798 system_pods.go:74] duration metric: took 5.471078ms to wait for pod list to return data ...
	I0120 14:05:35.490189 1060798 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:35.493453 1060798 default_sa.go:45] found service account: "default"
	I0120 14:05:35.493489 1060798 default_sa.go:55] duration metric: took 3.2839ms for default service account to be created ...
	I0120 14:05:35.493500 1060798 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:35.648514 1060798 system_pods.go:87] 9 kube-system pods found
	I0120 14:05:36.368251 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:38.868623 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:40.870222 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:43.380035 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:45.867670 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:47.868766 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:50.366281 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:52.367402 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:54.866983 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:54.867021 1061268 pod_ready.go:82] duration metric: took 4m0.006587828s for pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace to be "Ready" ...
	E0120 14:05:54.867033 1061268 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 14:05:54.867044 1061268 pod_ready.go:39] duration metric: took 4m2.396402991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
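The parallel profile's wait loop (process 1061268) gives up here after its 4m budget because metrics-server-f79f97bbb-nfwzt never reports Ready. The same condition can be reproduced and inspected with standard kubectl commands; this is a sketch only, and it assumes the addon's usual k8s-app=metrics-server label and a kubectl context pointing at that profile:

		kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
		kubectl -n kube-system describe pod metrics-server-f79f97bbb-nfwzt      # check Events for image pull / readiness probe failures
		kubectl -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=4m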
	I0120 14:05:54.867065 1061268 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:54.867111 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:54.867187 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:54.917788 1061268 cri.go:89] found id: "a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
	I0120 14:05:54.917828 1061268 cri.go:89] found id: "9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
	I0120 14:05:54.917834 1061268 cri.go:89] found id: ""
	I0120 14:05:54.917844 1061268 logs.go:282] 2 containers: [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3]
	I0120 14:05:54.917927 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:54.923337 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:54.929376 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:05:54.929471 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:54.984694 1061268 cri.go:89] found id: "02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
	I0120 14:05:54.984729 1061268 cri.go:89] found id: "0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
	I0120 14:05:54.984733 1061268 cri.go:89] found id: ""
	I0120 14:05:54.984750 1061268 logs.go:282] 2 containers: [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778]
	I0120 14:05:54.984816 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:54.990663 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:54.996383 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:05:54.996492 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:55.041873 1061268 cri.go:89] found id: "c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
	I0120 14:05:55.041908 1061268 cri.go:89] found id: "cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
	I0120 14:05:55.041914 1061268 cri.go:89] found id: ""
	I0120 14:05:55.041924 1061268 logs.go:282] 2 containers: [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9]
	I0120 14:05:55.042006 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.047779 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.052191 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:55.052295 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:55.102560 1061268 cri.go:89] found id: "09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
	I0120 14:05:55.102594 1061268 cri.go:89] found id: "d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
	I0120 14:05:55.102600 1061268 cri.go:89] found id: ""
	I0120 14:05:55.102610 1061268 logs.go:282] 2 containers: [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943]
	I0120 14:05:55.102682 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.108113 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.113558 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:55.113644 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:55.158692 1061268 cri.go:89] found id: "3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
	I0120 14:05:55.158724 1061268 cri.go:89] found id: "aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
	I0120 14:05:55.158729 1061268 cri.go:89] found id: ""
	I0120 14:05:55.158739 1061268 logs.go:282] 2 containers: [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33]
	I0120 14:05:55.158801 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.163830 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.168399 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:55.168475 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:55.224035 1061268 cri.go:89] found id: "c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
	I0120 14:05:55.224068 1061268 cri.go:89] found id: "025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
	I0120 14:05:55.224074 1061268 cri.go:89] found id: ""
	I0120 14:05:55.224085 1061268 logs.go:282] 2 containers: [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3]
	I0120 14:05:55.224158 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.228696 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.233948 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:55.234023 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:55.272989 1061268 cri.go:89] found id: ""
	I0120 14:05:55.273024 1061268 logs.go:282] 0 containers: []
	W0120 14:05:55.273033 1061268 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:55.273040 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:05:55.273108 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:05:55.320199 1061268 cri.go:89] found id: "192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
	I0120 14:05:55.320229 1061268 cri.go:89] found id: "f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
	I0120 14:05:55.320233 1061268 cri.go:89] found id: ""
	I0120 14:05:55.320242 1061268 logs.go:282] 2 containers: [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22]
	I0120 14:05:55.320295 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.325143 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.334774 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:55.334849 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:55.383085 1061268 cri.go:89] found id: "6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
	I0120 14:05:55.383121 1061268 cri.go:89] found id: ""
	I0120 14:05:55.383133 1061268 logs.go:282] 1 containers: [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39]
	I0120 14:05:55.383194 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:55.388216 1061268 logs.go:123] Gathering logs for kube-apiserver [9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3] ...
	I0120 14:05:55.388253 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
	I0120 14:05:55.446118 1061268 logs.go:123] Gathering logs for kube-controller-manager [025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3] ...
	I0120 14:05:55.446152 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
	I0120 14:05:55.502498 1061268 logs.go:123] Gathering logs for storage-provisioner [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8] ...
	I0120 14:05:55.502538 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
	I0120 14:05:55.548359 1061268 logs.go:123] Gathering logs for containerd ...
	I0120 14:05:55.548400 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:05:55.609421 1061268 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:55.609469 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:55.625660 1061268 logs.go:123] Gathering logs for etcd [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4] ...
	I0120 14:05:55.625702 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
	I0120 14:05:55.674797 1061268 logs.go:123] Gathering logs for kube-proxy [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802] ...
	I0120 14:05:55.674846 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
	I0120 14:05:55.715726 1061268 logs.go:123] Gathering logs for kube-proxy [aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33] ...
	I0120 14:05:55.715767 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
	I0120 14:05:55.755665 1061268 logs.go:123] Gathering logs for kube-controller-manager [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a] ...
	I0120 14:05:55.755700 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
	I0120 14:05:55.815422 1061268 logs.go:123] Gathering logs for storage-provisioner [f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22] ...
	I0120 14:05:55.815464 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
	I0120 14:05:55.858791 1061268 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:55.858825 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:55.937094 1061268 logs.go:123] Gathering logs for kube-apiserver [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42] ...
	I0120 14:05:55.937147 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
	I0120 14:05:55.991427 1061268 logs.go:123] Gathering logs for etcd [0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778] ...
	I0120 14:05:55.991470 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
	I0120 14:05:56.037962 1061268 logs.go:123] Gathering logs for coredns [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092] ...
	I0120 14:05:56.038001 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
	I0120 14:05:56.078966 1061268 logs.go:123] Gathering logs for coredns [cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9] ...
	I0120 14:05:56.079002 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
	I0120 14:05:56.123993 1061268 logs.go:123] Gathering logs for kube-scheduler [d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943] ...
	I0120 14:05:56.124028 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
	I0120 14:05:56.174816 1061268 logs.go:123] Gathering logs for container status ...
	I0120 14:05:56.174864 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:56.227944 1061268 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:56.227981 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:05:56.373827 1061268 logs.go:123] Gathering logs for kube-scheduler [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29] ...
	I0120 14:05:56.373869 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
	I0120 14:05:56.419064 1061268 logs.go:123] Gathering logs for kubernetes-dashboard [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39] ...
	I0120 14:05:56.419105 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
	I0120 14:05:58.964349 1061268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:58.982111 1061268 api_server.go:72] duration metric: took 4m11.799712602s to wait for apiserver process to appear ...
	I0120 14:05:58.982153 1061268 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:58.982207 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:58.982267 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:59.022764 1061268 cri.go:89] found id: "a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
	I0120 14:05:59.022791 1061268 cri.go:89] found id: "9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
	I0120 14:05:59.022795 1061268 cri.go:89] found id: ""
	I0120 14:05:59.022802 1061268 logs.go:282] 2 containers: [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3]
	I0120 14:05:59.022867 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.028807 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.035066 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:05:59.035164 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:59.081381 1061268 cri.go:89] found id: "02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
	I0120 14:05:59.081414 1061268 cri.go:89] found id: "0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
	I0120 14:05:59.081420 1061268 cri.go:89] found id: ""
	I0120 14:05:59.081431 1061268 logs.go:282] 2 containers: [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778]
	I0120 14:05:59.081503 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.086586 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.090923 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:05:59.091001 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:59.129195 1061268 cri.go:89] found id: "c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
	I0120 14:05:59.129229 1061268 cri.go:89] found id: "cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
	I0120 14:05:59.129235 1061268 cri.go:89] found id: ""
	I0120 14:05:59.129245 1061268 logs.go:282] 2 containers: [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9]
	I0120 14:05:59.129310 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.134230 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.139242 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:59.139365 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:59.180849 1061268 cri.go:89] found id: "09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
	I0120 14:05:59.180884 1061268 cri.go:89] found id: "d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
	I0120 14:05:59.180888 1061268 cri.go:89] found id: ""
	I0120 14:05:59.180898 1061268 logs.go:282] 2 containers: [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943]
	I0120 14:05:59.180991 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.185950 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.190730 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:59.190818 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:59.232733 1061268 cri.go:89] found id: "3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
	I0120 14:05:59.232774 1061268 cri.go:89] found id: "aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
	I0120 14:05:59.232780 1061268 cri.go:89] found id: ""
	I0120 14:05:59.232790 1061268 logs.go:282] 2 containers: [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33]
	I0120 14:05:59.232861 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.238473 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.243105 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:59.243188 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:59.282102 1061268 cri.go:89] found id: "c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
	I0120 14:05:59.282132 1061268 cri.go:89] found id: "025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
	I0120 14:05:59.282137 1061268 cri.go:89] found id: ""
	I0120 14:05:59.282147 1061268 logs.go:282] 2 containers: [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3]
	I0120 14:05:59.282231 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.286964 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.291689 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:59.291770 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:59.335494 1061268 cri.go:89] found id: ""
	I0120 14:05:59.335532 1061268 logs.go:282] 0 containers: []
	W0120 14:05:59.335542 1061268 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:59.335550 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:59.335622 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:59.382200 1061268 cri.go:89] found id: "6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
	I0120 14:05:59.382235 1061268 cri.go:89] found id: ""
	I0120 14:05:59.382245 1061268 logs.go:282] 1 containers: [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39]
	I0120 14:05:59.382303 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.387107 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:05:59.387204 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:05:59.425237 1061268 cri.go:89] found id: "192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
	I0120 14:05:59.425271 1061268 cri.go:89] found id: "f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
	I0120 14:05:59.425277 1061268 cri.go:89] found id: ""
	I0120 14:05:59.425286 1061268 logs.go:282] 2 containers: [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22]
	I0120 14:05:59.425364 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.430391 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:05:59.435125 1061268 logs.go:123] Gathering logs for kube-controller-manager [025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3] ...
	I0120 14:05:59.435168 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
	I0120 14:05:59.489718 1061268 logs.go:123] Gathering logs for storage-provisioner [f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22] ...
	I0120 14:05:59.489762 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
	I0120 14:05:59.536425 1061268 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:59.536471 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:59.555049 1061268 logs.go:123] Gathering logs for kube-proxy [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802] ...
	I0120 14:05:59.555087 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
	I0120 14:05:59.597084 1061268 logs.go:123] Gathering logs for kube-proxy [aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33] ...
	I0120 14:05:59.597125 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
	I0120 14:05:59.638067 1061268 logs.go:123] Gathering logs for kube-controller-manager [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a] ...
	I0120 14:05:59.638100 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
	I0120 14:05:59.706228 1061268 logs.go:123] Gathering logs for kube-apiserver [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42] ...
	I0120 14:05:59.706274 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
	I0120 14:05:59.753770 1061268 logs.go:123] Gathering logs for kube-apiserver [9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3] ...
	I0120 14:05:59.753834 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
	I0120 14:05:59.806616 1061268 logs.go:123] Gathering logs for etcd [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4] ...
	I0120 14:05:59.806661 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
	I0120 14:05:59.855127 1061268 logs.go:123] Gathering logs for containerd ...
	I0120 14:05:59.855170 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:05:59.912684 1061268 logs.go:123] Gathering logs for kube-scheduler [d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943] ...
	I0120 14:05:59.912740 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
	I0120 14:05:59.961054 1061268 logs.go:123] Gathering logs for kubernetes-dashboard [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39] ...
	I0120 14:05:59.961101 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
	I0120 14:05:59.999981 1061268 logs.go:123] Gathering logs for storage-provisioner [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8] ...
	I0120 14:06:00.000018 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
	I0120 14:06:00.043176 1061268 logs.go:123] Gathering logs for container status ...
	I0120 14:06:00.043224 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:06:00.088503 1061268 logs.go:123] Gathering logs for kubelet ...
	I0120 14:06:00.088544 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:06:00.165437 1061268 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:06:00.165486 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:06:00.295533 1061268 logs.go:123] Gathering logs for etcd [0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778] ...
	I0120 14:06:00.295579 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
	I0120 14:06:00.357211 1061268 logs.go:123] Gathering logs for kube-scheduler [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29] ...
	I0120 14:06:00.357243 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
	I0120 14:06:00.405816 1061268 logs.go:123] Gathering logs for coredns [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092] ...
	I0120 14:06:00.405851 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
	I0120 14:06:00.448633 1061268 logs.go:123] Gathering logs for coredns [cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9] ...
	I0120 14:06:00.448668 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
	I0120 14:06:02.993693 1061268 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8444/healthz ...
	I0120 14:06:03.000837 1061268 api_server.go:279] https://192.168.39.158:8444/healthz returned 200:
	ok
	I0120 14:06:03.002153 1061268 api_server.go:141] control plane version: v1.32.0
	I0120 14:06:03.002197 1061268 api_server.go:131] duration metric: took 4.020033778s to wait for apiserver health ...
	I0120 14:06:03.002209 1061268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:06:03.002251 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:06:03.002366 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:06:03.042946 1061268 cri.go:89] found id: "a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
	I0120 14:06:03.042976 1061268 cri.go:89] found id: "9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
	I0120 14:06:03.042982 1061268 cri.go:89] found id: ""
	I0120 14:06:03.042992 1061268 logs.go:282] 2 containers: [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3]
	I0120 14:06:03.043060 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.048245 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.054072 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:06:03.054163 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:06:03.095236 1061268 cri.go:89] found id: "02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
	I0120 14:06:03.095267 1061268 cri.go:89] found id: "0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
	I0120 14:06:03.095273 1061268 cri.go:89] found id: ""
	I0120 14:06:03.095283 1061268 logs.go:282] 2 containers: [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778]
	I0120 14:06:03.095356 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.101394 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.106404 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:06:03.106491 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:06:03.147747 1061268 cri.go:89] found id: "c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
	I0120 14:06:03.147777 1061268 cri.go:89] found id: "cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
	I0120 14:06:03.147784 1061268 cri.go:89] found id: ""
	I0120 14:06:03.147794 1061268 logs.go:282] 2 containers: [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9]
	I0120 14:06:03.147859 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.153519 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.158247 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:06:03.158333 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:06:03.197681 1061268 cri.go:89] found id: "09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
	I0120 14:06:03.197714 1061268 cri.go:89] found id: "d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
	I0120 14:06:03.197721 1061268 cri.go:89] found id: ""
	I0120 14:06:03.197731 1061268 logs.go:282] 2 containers: [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943]
	I0120 14:06:03.197798 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.204003 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.208671 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:06:03.208757 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:06:03.256457 1061268 cri.go:89] found id: "3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
	I0120 14:06:03.256487 1061268 cri.go:89] found id: "aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
	I0120 14:06:03.256491 1061268 cri.go:89] found id: ""
	I0120 14:06:03.256499 1061268 logs.go:282] 2 containers: [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33]
	I0120 14:06:03.256549 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.262961 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.268145 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:06:03.268221 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:06:03.312818 1061268 cri.go:89] found id: "c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
	I0120 14:06:03.312847 1061268 cri.go:89] found id: "025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
	I0120 14:06:03.312851 1061268 cri.go:89] found id: ""
	I0120 14:06:03.312859 1061268 logs.go:282] 2 containers: [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3]
	I0120 14:06:03.312920 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.318436 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.323982 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:06:03.324066 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:06:03.380746 1061268 cri.go:89] found id: ""
	I0120 14:06:03.380779 1061268 logs.go:282] 0 containers: []
	W0120 14:06:03.380787 1061268 logs.go:284] No container was found matching "kindnet"
	I0120 14:06:03.380794 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:06:03.380858 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:06:03.429155 1061268 cri.go:89] found id: "6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
	I0120 14:06:03.429183 1061268 cri.go:89] found id: ""
	I0120 14:06:03.429193 1061268 logs.go:282] 1 containers: [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39]
	I0120 14:06:03.429264 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.434046 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:06:03.434129 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:06:03.478490 1061268 cri.go:89] found id: "192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
	I0120 14:06:03.478519 1061268 cri.go:89] found id: "f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
	I0120 14:06:03.478523 1061268 cri.go:89] found id: ""
	I0120 14:06:03.478531 1061268 logs.go:282] 2 containers: [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22]
	I0120 14:06:03.478587 1061268 ssh_runner.go:195] Run: which crictl
	I0120 14:06:03.483366 1061268 ssh_runner.go:195] Run: which crictl
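
The discovery pass above (cri.go / logs.go) asks crictl for every container of a given name before tailing its logs. The following is a minimal sketch of that step only, not minikube's cri.go implementation; it assumes crictl is on PATH and that sudo does not prompt for a password.

    // discover.go - minimal sketch of the container-discovery step shown above:
    // list all CRI containers for a given name with crictl and collect their IDs.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func listContainerIDs(name string) ([]string, error) {
    	// Equivalent of: sudo crictl ps -a --quiet --name=<name>
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
    	}
    	return strings.Fields(string(out)), nil // one container ID per output line
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainerIDs(name)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		fmt.Printf("%d containers for %s: %v\n", len(ids), name, ids)
    	}
    }
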
	I0120 14:06:03.488091 1061268 logs.go:123] Gathering logs for etcd [0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778] ...
	I0120 14:06:03.488125 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
	I0120 14:06:03.537772 1061268 logs.go:123] Gathering logs for coredns [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092] ...
	I0120 14:06:03.537823 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
	I0120 14:06:03.584100 1061268 logs.go:123] Gathering logs for kube-controller-manager [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a] ...
	I0120 14:06:03.584134 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
	I0120 14:06:03.646671 1061268 logs.go:123] Gathering logs for container status ...
	I0120 14:06:03.646723 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:06:03.706076 1061268 logs.go:123] Gathering logs for coredns [cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9] ...
	I0120 14:06:03.706119 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
	I0120 14:06:03.745730 1061268 logs.go:123] Gathering logs for kube-scheduler [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29] ...
	I0120 14:06:03.745775 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
	I0120 14:06:03.786902 1061268 logs.go:123] Gathering logs for kube-proxy [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802] ...
	I0120 14:06:03.786940 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
	I0120 14:06:03.830070 1061268 logs.go:123] Gathering logs for storage-provisioner [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8] ...
	I0120 14:06:03.830115 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
	I0120 14:06:03.874536 1061268 logs.go:123] Gathering logs for storage-provisioner [f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22] ...
	I0120 14:06:03.874594 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
	I0120 14:06:03.915750 1061268 logs.go:123] Gathering logs for kube-proxy [aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33] ...
	I0120 14:06:03.915784 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
	I0120 14:06:03.956123 1061268 logs.go:123] Gathering logs for kube-controller-manager [025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3] ...
	I0120 14:06:03.956162 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
	I0120 14:06:04.016008 1061268 logs.go:123] Gathering logs for kubernetes-dashboard [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39] ...
	I0120 14:06:04.016059 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
	I0120 14:06:04.060273 1061268 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:06:04.060312 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:06:04.188515 1061268 logs.go:123] Gathering logs for kube-apiserver [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42] ...
	I0120 14:06:04.188571 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
	I0120 14:06:04.236379 1061268 logs.go:123] Gathering logs for kube-apiserver [9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3] ...
	I0120 14:06:04.236416 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
	I0120 14:06:04.290511 1061268 logs.go:123] Gathering logs for etcd [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4] ...
	I0120 14:06:04.290552 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
	I0120 14:06:04.344991 1061268 logs.go:123] Gathering logs for kube-scheduler [d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943] ...
	I0120 14:06:04.345034 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
	I0120 14:06:04.409146 1061268 logs.go:123] Gathering logs for containerd ...
	I0120 14:06:04.409193 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:06:04.469681 1061268 logs.go:123] Gathering logs for kubelet ...
	I0120 14:06:04.469730 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:06:04.551443 1061268 logs.go:123] Gathering logs for dmesg ...
	I0120 14:06:04.551486 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:06:07.075113 1061268 system_pods.go:59] 8 kube-system pods found
	I0120 14:06:07.075149 1061268 system_pods.go:61] "coredns-668d6bf9bc-j4tcz" [ec868aad-83ba-424b-9c45-f01cb97dbf5c] Running
	I0120 14:06:07.075154 1061268 system_pods.go:61] "etcd-default-k8s-diff-port-901416" [4b431891-d618-45f1-9818-02abb09dc774] Running
	I0120 14:06:07.075161 1061268 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901416" [2aa81ce3-8c3f-454a-aa5d-ad52e56f16b6] Running
	I0120 14:06:07.075164 1061268 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901416" [f937feab-0776-4a1e-8a99-659250ad2bfb] Running
	I0120 14:06:07.075167 1061268 system_pods.go:61] "kube-proxy-6v2v7" [53d00002-be0a-4f71-97d2-607e482c5bfd] Running
	I0120 14:06:07.075170 1061268 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901416" [525530a5-8789-4916-967a-6e976e91ccb3] Running
	I0120 14:06:07.075177 1061268 system_pods.go:61] "metrics-server-f79f97bbb-nfwzt" [ba691a4d-ec1c-4929-ab0e-58fb2e485165] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:06:07.075181 1061268 system_pods.go:61] "storage-provisioner" [6d5f71d2-7d29-4a8b-ad69-8ad65b9565f6] Running
	I0120 14:06:07.075189 1061268 system_pods.go:74] duration metric: took 4.072972909s to wait for pod list to return data ...
	I0120 14:06:07.075199 1061268 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:06:07.077984 1061268 default_sa.go:45] found service account: "default"
	I0120 14:06:07.078010 1061268 default_sa.go:55] duration metric: took 2.803991ms for default service account to be created ...
	I0120 14:06:07.078018 1061268 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:06:07.082687 1061268 system_pods.go:87] 8 kube-system pods found
	I0120 14:06:07.086241 1061268 system_pods.go:105] "coredns-668d6bf9bc-j4tcz" [ec868aad-83ba-424b-9c45-f01cb97dbf5c] Running
	I0120 14:06:07.086270 1061268 system_pods.go:105] "etcd-default-k8s-diff-port-901416" [4b431891-d618-45f1-9818-02abb09dc774] Running
	I0120 14:06:07.086279 1061268 system_pods.go:105] "kube-apiserver-default-k8s-diff-port-901416" [2aa81ce3-8c3f-454a-aa5d-ad52e56f16b6] Running
	I0120 14:06:07.086287 1061268 system_pods.go:105] "kube-controller-manager-default-k8s-diff-port-901416" [f937feab-0776-4a1e-8a99-659250ad2bfb] Running
	I0120 14:06:07.086293 1061268 system_pods.go:105] "kube-proxy-6v2v7" [53d00002-be0a-4f71-97d2-607e482c5bfd] Running
	I0120 14:06:07.086299 1061268 system_pods.go:105] "kube-scheduler-default-k8s-diff-port-901416" [525530a5-8789-4916-967a-6e976e91ccb3] Running
	I0120 14:06:07.086312 1061268 system_pods.go:105] "metrics-server-f79f97bbb-nfwzt" [ba691a4d-ec1c-4929-ab0e-58fb2e485165] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:06:07.086321 1061268 system_pods.go:105] "storage-provisioner" [6d5f71d2-7d29-4a8b-ad69-8ad65b9565f6] Running
	I0120 14:06:07.086334 1061268 system_pods.go:147] duration metric: took 8.307949ms to wait for k8s-apps to be running ...
	I0120 14:06:07.086345 1061268 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 14:06:07.086398 1061268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:06:07.103417 1061268 system_svc.go:56] duration metric: took 17.063515ms WaitForService to wait for kubelet
	I0120 14:06:07.103451 1061268 kubeadm.go:582] duration metric: took 4m19.921060894s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:06:07.103481 1061268 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:06:07.107665 1061268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:06:07.107690 1061268 node_conditions.go:123] node cpu capacity is 2
	I0120 14:06:07.107704 1061268 node_conditions.go:105] duration metric: took 4.218612ms to run NodePressure ...
	I0120 14:06:07.107717 1061268 start.go:241] waiting for startup goroutines ...
	I0120 14:06:07.107724 1061268 start.go:246] waiting for cluster config update ...
	I0120 14:06:07.107735 1061268 start.go:255] writing updated cluster config ...
	I0120 14:06:07.108022 1061268 ssh_runner.go:195] Run: rm -f paused
	I0120 14:06:07.161569 1061268 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 14:06:07.163860 1061268 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-901416" cluster and "default" namespace by default
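
The stream above repeatedly polls the apiserver's healthz endpoint (https://192.168.39.158:8444/healthz) until it returns 200/ok before declaring the cluster ready. Below is a minimal sketch of that readiness probe using the address from the log; TLS verification is skipped purely for illustration, whereas the real check would authenticate with the cluster's credentials.

    // healthz.go - minimal sketch of the apiserver readiness probe seen above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.158:8444/healthz" // address taken from the log above
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // apiserver is healthy
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver did not become healthy in time")
    }
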
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	90e122ccfe167       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   9                   07210fc27dc62       dashboard-metrics-scraper-86c6bf9756-7w7lb
	aa71aaebc59f9       07655ddf2eebe       21 minutes ago       Running             kubernetes-dashboard        0                   35104563f83fe       kubernetes-dashboard-7779f9b69b-vcbk9
	f3a40f7f95672       6e38f40d628db       21 minutes ago       Running             storage-provisioner         0                   1c74e8c1fdafc       storage-provisioner
	7bc12f446e72d       c69fa2e9cbf5f       21 minutes ago       Running             coredns                     0                   1f79084cba5c8       coredns-668d6bf9bc-6dk7s
	4d1a4fdda2e14       c69fa2e9cbf5f       21 minutes ago       Running             coredns                     0                   c2c50aa0c057b       coredns-668d6bf9bc-88phd
	064833c57608a       040f9f8aac8cd       21 minutes ago       Running             kube-proxy                  0                   d2267e69e323c       kube-proxy-p5rcq
	050793e2ff918       8cab3d2a8bd0f       21 minutes ago       Running             kube-controller-manager     2                   16f3ec6463e28       kube-controller-manager-embed-certs-553677
	5af45fd19b3a6       c2e17b8d0f4a3       21 minutes ago       Running             kube-apiserver              2                   edb37c69c017c       kube-apiserver-embed-certs-553677
	f3a74e677451d       a389e107f4ff1       21 minutes ago       Running             kube-scheduler              2                   a77dd60ee5de8       kube-scheduler-embed-certs-553677
	538390e842743       a9e7e6b294baf       21 minutes ago       Running             etcd                        2                   079cf5b17f8a6       etcd-embed-certs-553677
	
	
	==> containerd <==
	Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.534294224Z" level=info msg="CreateContainer within sandbox \"07210fc27dc628b6dc419419431fe45b47634a6e12a353edf43b67d2cdb1da85\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\""
	Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.535518905Z" level=info msg="StartContainer for \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\""
	Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.617797496Z" level=info msg="StartContainer for \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\" returns successfully"
	Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.690252358Z" level=info msg="shim disconnected" id=49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04 namespace=k8s.io
	Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.690381193Z" level=warning msg="cleaning up after shim disconnected" id=49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04 namespace=k8s.io
	Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.690433839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 14:21:05 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:05.084637273Z" level=info msg="RemoveContainer for \"f1c4781239f0e7cc966e2d446499da901e04b88d09396170fcbfad1da9597285\""
	Jan 20 14:21:05 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:05.093136508Z" level=info msg="RemoveContainer for \"f1c4781239f0e7cc966e2d446499da901e04b88d09396170fcbfad1da9597285\" returns successfully"
	Jan 20 14:21:36 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:36.506463132Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:21:36 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:36.538989240Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 20 14:21:36 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:36.541541272Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 20 14:21:36 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:36.541662977Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.508261940Z" level=info msg="CreateContainer within sandbox \"07210fc27dc628b6dc419419431fe45b47634a6e12a353edf43b67d2cdb1da85\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.556225963Z" level=info msg="CreateContainer within sandbox \"07210fc27dc628b6dc419419431fe45b47634a6e12a353edf43b67d2cdb1da85\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5\""
	Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.565617835Z" level=info msg="StartContainer for \"90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5\""
	Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.697109852Z" level=info msg="StartContainer for \"90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5\" returns successfully"
	Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.750723833Z" level=info msg="shim disconnected" id=90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5 namespace=k8s.io
	Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.750796519Z" level=warning msg="cleaning up after shim disconnected" id=90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5 namespace=k8s.io
	Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.750806552Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.831955930Z" level=info msg="RemoveContainer for \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\""
	Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.838650816Z" level=info msg="RemoveContainer for \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\" returns successfully"
	Jan 20 14:26:49 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:49.504719521Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:26:49 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:49.527169274Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 20 14:26:49 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:49.529828576Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 20 14:26:49 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:49.529856350Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
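
The containerd entries above show every pull of fake.domain/registry.k8s.io/echoserver:1.4 failing with "lookup fake.domain: no such host", i.e. the registry host simply does not resolve, so the metrics-server pod stays Pending. A minimal sketch reproducing just the DNS side of that failure (run anywhere with working DNS):

    // lookup.go - reproduce the name-resolution failure behind the PullImage errors above.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	addrs, err := net.LookupHost("fake.domain")
    	if err != nil {
    		// Expected: a DNS "no such host" error, matching the containerd message.
    		fmt.Println("lookup failed as expected:", err)
    		return
    	}
    	fmt.Println("unexpectedly resolved fake.domain to:", addrs)
    }
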
	
	
	==> coredns [4d1a4fdda2e1453c6a2cbe67869cc5361f63a5d8d0849b836d4ef4563b425223] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [7bc12f446e72dcf6c0cc56dea29b424f21c189627cd06fef036baabc8bfd7896] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-553677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-553677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3
	                    minikube.k8s.io/name=embed-certs-553677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T14_05_23_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 14:05:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-553677
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 14:27:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:05:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:05:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:05:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:05:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.136
	  Hostname:    embed-certs-553677
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 afa001d7b3024a01a82fa78feaf4cee9
	  System UUID:                afa001d7-b302-4a01-a82f-a78feaf4cee9
	  Boot ID:                    3d5c3b4b-1f08-4d28-840b-a8710e76bcea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-6dk7s                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-88phd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-553677                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-553677             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-553677    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-p5rcq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-553677             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-b92sv                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-7w7lb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-vcbk9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-553677 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-553677 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-553677 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-553677 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-553677 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-553677 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-553677 event: Registered Node embed-certs-553677 in Controller
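
The "Allocated resources" table above expresses pod requests and limits as a share of the node's allocatable capacity (2 CPUs, 2164184Ki memory). A small sketch that reproduces those percentages from the per-pod values in the table; the output suggests the percentages are truncated rather than rounded.

    // alloc.go - check of the "Allocated resources" percentages in the node description above.
    package main

    import "fmt"

    func main() {
    	const (
    		cpuAllocatableMilli = 2000    // Allocatable: cpu 2
    		memAllocatableKi    = 2164184 // Allocatable: memory 2164184Ki
    	)
    	cpuRequestMilli := 950     // 100+100+100+250+200+100+100 m from the pod table
    	memRequestKi := 440 * 1024 // 70+70+100+200 Mi
    	memLimitKi := 340 * 1024   // 170+170 Mi

    	fmt.Printf("cpu requests: %d%%\n", 100*cpuRequestMilli/cpuAllocatableMilli) // 47
    	fmt.Printf("memory requests: %d%%\n", 100*memRequestKi/memAllocatableKi)    // 20
    	fmt.Printf("memory limits: %d%%\n", 100*memLimitKi/memAllocatableKi)        // 16
    }
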
	
	
	==> dmesg <==
	[  +0.053332] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042725] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.047663] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.023557] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.720652] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.404676] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +0.066653] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069864] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +0.199782] systemd-fstab-generator[514]: Ignoring "noauto" option for root device
	[  +0.171364] systemd-fstab-generator[526]: Ignoring "noauto" option for root device
	[  +0.356820] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +1.642366] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +2.261960] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.294436] kauditd_printk_skb: 217 callbacks suppressed
	[  +5.464773] kauditd_printk_skb: 38 callbacks suppressed
	[Jan20 14:01] kauditd_printk_skb: 91 callbacks suppressed
	[Jan20 14:05] systemd-fstab-generator[3057]: Ignoring "noauto" option for root device
	[  +1.718830] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.888402] systemd-fstab-generator[3421]: Ignoring "noauto" option for root device
	[  +4.442390] systemd-fstab-generator[3512]: Ignoring "noauto" option for root device
	[  +0.686341] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.733071] kauditd_printk_skb: 90 callbacks suppressed
	[  +5.501271] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [538390e8427430eed2b6e4bf3b12641221cd77efecc9084f454604fecfbeb222] <==
	{"level":"info","ts":"2025-01-20T14:05:16.916542Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-20T14:05:16.919019Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T14:05:16.919830Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T14:05:16.920661Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-20T14:05:40.309184Z","caller":"traceutil/trace.go:171","msg":"trace[275623798] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"528.759205ms","start":"2025-01-20T14:05:39.778403Z","end":"2025-01-20T14:05:40.307162Z","steps":["trace[275623798] 'process raft request'  (duration: 528.641002ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T14:05:40.321410Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T14:05:39.778376Z","time spent":"533.123963ms","remote":"127.0.0.1:44246","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-q327nomwnwzpte6jv5e2j5y5c4\" mod_revision:433 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-q327nomwnwzpte6jv5e2j5y5c4\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-q327nomwnwzpte6jv5e2j5y5c4\" > >"}
	{"level":"info","ts":"2025-01-20T14:05:40.378602Z","caller":"traceutil/trace.go:171","msg":"trace[1052250938] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"597.260719ms","start":"2025-01-20T14:05:39.781325Z","end":"2025-01-20T14:05:40.378586Z","steps":["trace[1052250938] 'process raft request'  (duration: 593.417016ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T14:05:40.378775Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T14:05:39.781282Z","time spent":"597.435761ms","remote":"127.0.0.1:44132","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:534 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-20T14:05:40.379928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"558.853213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:05:40.380725Z","caller":"traceutil/trace.go:171","msg":"trace[1909386249] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:538; }","duration":"559.681584ms","start":"2025-01-20T14:05:39.821033Z","end":"2025-01-20T14:05:40.380715Z","steps":["trace[1909386249] 'agreement among raft nodes before linearized reading'  (duration: 558.748392ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T14:05:40.381391Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T14:05:39.821017Z","time spent":"560.354765ms","remote":"127.0.0.1:44156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-01-20T14:05:40.379852Z","caller":"traceutil/trace.go:171","msg":"trace[384467576] linearizableReadLoop","detail":"{readStateIndex:554; appliedIndex:553; }","duration":"557.218788ms","start":"2025-01-20T14:05:39.821079Z","end":"2025-01-20T14:05:40.378298Z","steps":["trace[384467576] 'read index received'  (duration: 486.477234ms)","trace[384467576] 'applied index is now lower than readState.Index'  (duration: 70.741009ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T14:05:40.382044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.910973ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:05:40.382086Z","caller":"traceutil/trace.go:171","msg":"trace[128622718] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:538; }","duration":"413.996684ms","start":"2025-01-20T14:05:39.968082Z","end":"2025-01-20T14:05:40.382079Z","steps":["trace[128622718] 'agreement among raft nodes before linearized reading'  (duration: 413.897921ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T14:05:40.382378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.942541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:05:40.382471Z","caller":"traceutil/trace.go:171","msg":"trace[1862852755] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:538; }","duration":"167.070504ms","start":"2025-01-20T14:05:40.215394Z","end":"2025-01-20T14:05:40.382464Z","steps":["trace[1862852755] 'agreement among raft nodes before linearized reading'  (duration: 166.96143ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:15:17.599614Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":865}
	{"level":"info","ts":"2025-01-20T14:15:17.644978Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":865,"took":"44.248618ms","hash":3046015032,"current-db-size-bytes":2932736,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2932736,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-20T14:15:17.645112Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3046015032,"revision":865,"compact-revision":-1}
	{"level":"info","ts":"2025-01-20T14:20:17.608243Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1116}
	{"level":"info","ts":"2025-01-20T14:20:17.613512Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1116,"took":"4.438362ms","hash":3116006083,"current-db-size-bytes":2932736,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1769472,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T14:20:17.613817Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3116006083,"revision":1116,"compact-revision":865}
	{"level":"info","ts":"2025-01-20T14:25:17.616521Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1368}
	{"level":"info","ts":"2025-01-20T14:25:17.622018Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1368,"took":"4.396777ms","hash":763323634,"current-db-size-bytes":2932736,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1781760,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T14:25:17.622098Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":763323634,"revision":1368,"compact-revision":1116}
	
	
	==> kernel <==
	 14:27:10 up 26 min,  0 users,  load average: 0.02, 0.16, 0.17
	Linux embed-certs-553677 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5af45fd19b3a6433b0b011d77366c31a3a8c61d3527622e19f52c945a44ed255] <==
	 > logger="UnhandledError"
	I0120 14:23:20.441545       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 14:25:19.440651       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:25:19.441001       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 14:25:20.443314       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:25:20.443676       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 14:25:20.443960       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:25:20.444271       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 14:25:20.445117       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:25:20.446335       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 14:26:20.446155       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:26:20.446512       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 14:26:20.446647       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:26:20.446799       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 14:26:20.448322       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:26:20.448368       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [050793e2ff9184684b006b118ddbf73bfbbb3def7f332f79e31a733a246e93a7] <==
	I0120 14:22:03.521818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="198.307µs"
	E0120 14:22:26.187306       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:22:26.301152       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:22:56.194463       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:22:56.309804       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:23:26.202042       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:23:26.318172       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:23:56.208710       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:23:56.331786       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:24:26.217184       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:24:26.340018       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:24:56.224512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:24:56.351505       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:25:26.231136       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:25:26.362047       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:25:56.237527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:25:56.377674       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 14:26:06.851029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="372.17µs"
	I0120 14:26:10.535147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="102.298µs"
	E0120 14:26:26.245147       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:26:26.385459       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 14:26:27.587010       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-553677"
	E0120 14:26:56.252342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:26:56.395195       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 14:27:03.527481       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="155.198µs"
	
	
	==> kube-proxy [064833c57608a9b9181fcc6a9d9b35b48ac3129395f162eeb3fbcbd8d61ab67e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 14:05:27.931307       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 14:05:27.957437       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.136"]
	E0120 14:05:27.957527       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 14:05:28.174669       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 14:05:28.175283       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 14:05:28.175539       1 server_linux.go:170] "Using iptables Proxier"
	I0120 14:05:28.195706       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 14:05:28.204551       1 server.go:497] "Version info" version="v1.32.0"
	I0120 14:05:28.209403       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 14:05:28.214955       1 config.go:199] "Starting service config controller"
	I0120 14:05:28.214995       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 14:05:28.215023       1 config.go:105] "Starting endpoint slice config controller"
	I0120 14:05:28.215027       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 14:05:28.215788       1 config.go:329] "Starting node config controller"
	I0120 14:05:28.215796       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 14:05:28.315699       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 14:05:28.315788       1 shared_informer.go:320] Caches are synced for service config
	I0120 14:05:28.316146       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f3a74e677451d5bed228f9f6297ebbd6bf5ab847fc34d9d171f66744d92aa03e] <==
	W0120 14:05:20.380410       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 14:05:20.380696       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.413135       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 14:05:20.413440       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.464122       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 14:05:20.464431       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.509303       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 14:05:20.509593       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.575187       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 14:05:20.575225       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.669393       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 14:05:20.669849       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.730081       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 14:05:20.730184       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.801186       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 14:05:20.801666       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.806171       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 14:05:20.806483       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.810646       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 14:05:20.811053       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.859553       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 14:05:20.862180       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:20.879238       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 14:05:20.881971       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0120 14:05:22.749121       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 14:26:06 embed-certs-553677 kubelet[3428]: I0120 14:26:06.830426    3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
	Jan 20 14:26:06 embed-certs-553677 kubelet[3428]: E0120 14:26:06.830598    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
	Jan 20 14:26:10 embed-certs-553677 kubelet[3428]: I0120 14:26:10.515570    3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
	Jan 20 14:26:10 embed-certs-553677 kubelet[3428]: E0120 14:26:10.515762    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
	Jan 20 14:26:13 embed-certs-553677 kubelet[3428]: E0120 14:26:13.504371    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
	Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: I0120 14:26:22.504394    3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
	Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: E0120 14:26:22.504605    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
	Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: E0120 14:26:22.552421    3428 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 14:26:22 embed-certs-553677 kubelet[3428]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 14:26:22 embed-certs-553677 kubelet[3428]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 14:26:22 embed-certs-553677 kubelet[3428]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 14:26:22 embed-certs-553677 kubelet[3428]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 14:26:24 embed-certs-553677 kubelet[3428]: E0120 14:26:24.504652    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
	Jan 20 14:26:33 embed-certs-553677 kubelet[3428]: I0120 14:26:33.503088    3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
	Jan 20 14:26:33 embed-certs-553677 kubelet[3428]: E0120 14:26:33.503288    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
	Jan 20 14:26:36 embed-certs-553677 kubelet[3428]: E0120 14:26:36.504700    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
	Jan 20 14:26:47 embed-certs-553677 kubelet[3428]: I0120 14:26:47.504139    3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
	Jan 20 14:26:47 embed-certs-553677 kubelet[3428]: E0120 14:26:47.504304    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
	Jan 20 14:26:49 embed-certs-553677 kubelet[3428]: E0120 14:26:49.530250    3428 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 20 14:26:49 embed-certs-553677 kubelet[3428]: E0120 14:26:49.530629    3428 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 20 14:26:49 embed-certs-553677 kubelet[3428]: E0120 14:26:49.531084    3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt4r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-b92sv_kube-system(f9b310a6-0d19-4084-aeae-ebe0a395d042): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 20 14:26:49 embed-certs-553677 kubelet[3428]: E0120 14:26:49.532666    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
	Jan 20 14:27:02 embed-certs-553677 kubelet[3428]: I0120 14:27:02.503787    3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
	Jan 20 14:27:02 embed-certs-553677 kubelet[3428]: E0120 14:27:02.504083    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
	Jan 20 14:27:03 embed-certs-553677 kubelet[3428]: E0120 14:27:03.504724    3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
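
Two pods dominate this kubelet log: metrics-server-f79f97bbb-b92sv keeps cycling through ErrImagePull/ImagePullBackOff because its image points at the unresolvable registry fake.domain, and dashboard-metrics-scraper-86c6bf9756-7w7lb is stuck in CrashLoopBackOff. While the cluster is still up, the usual next step would be something like the following (a sketch, reusing the pod names from the log above):

	kubectl --context embed-certs-553677 -n kube-system describe pod metrics-server-f79f97bbb-b92sv
	kubectl --context embed-certs-553677 -n kubernetes-dashboard logs --previous dashboard-metrics-scraper-86c6bf9756-7w7lb

The first command collects the image-pull events in one place; the second prints the output of the last crashed container, which is what CrashLoopBackOff otherwise hides.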
	
	
	==> kubernetes-dashboard [aa71aaebc59f9590fdc60ff9497fc4fc81c29c6979fd8605e7cc5aebe6bb547c] <==
	2025/01/20 14:15:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:15:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:16:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:16:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:17:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:17:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:18:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:18:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:19:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:19:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:20:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:20:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:21:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:21:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:22:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:22:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:23:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:23:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:24:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:24:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:25:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:25:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:26:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:26:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:27:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
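
The dashboard's metric client health check fails every 30 seconds for the entire run because the dashboard-metrics-scraper service never becomes reachable; that is consistent with the scraper pod sitting in CrashLoopBackOff in the kubelet log above, since a service whose only pod never stays up has no ready endpoints. A quick check while the cluster is up might look like this (a sketch, reusing names from the logs):

	kubectl --context embed-certs-553677 -n kubernetes-dashboard get svc dashboard-metrics-scraper
	kubectl --context embed-certs-553677 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper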
	
	
	==> storage-provisioner [f3a40f7f9567275383954868f26b5113e242695b1aa9fc8ba6ba3fdba97915c9] <==
	I0120 14:05:29.582932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 14:05:29.626473       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 14:05:29.626834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 14:05:29.644671       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 14:05:29.645594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-553677_a045e4b7-35b0-4b64-a3d1-5f501c904876!
	I0120 14:05:29.658176       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1be817fd-bc8a-4df2-9610-54e186f604de", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-553677_a045e4b7-35b0-4b64-a3d1-5f501c904876 became leader
	I0120 14:05:29.747944       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-553677_a045e4b7-35b0-4b64-a3d1-5f501c904876!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-553677 -n embed-certs-553677
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-553677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-b92sv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-553677 describe pod metrics-server-f79f97bbb-b92sv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-553677 describe pod metrics-server-f79f97bbb-b92sv: exit status 1 (70.82065ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-b92sv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-553677 describe pod metrics-server-f79f97bbb-b92sv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1622.74s)

                                                
                                    

Test pass (285/324)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.22
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.0/json-events 12.66
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.07
18 TestDownloadOnly/v1.32.0/DeleteAll 0.15
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.64
22 TestOffline 64.05
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 213.17
29 TestAddons/serial/Volcano 42.87
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.56
35 TestAddons/parallel/Registry 15.53
36 TestAddons/parallel/Ingress 23.25
37 TestAddons/parallel/InspektorGadget 10.91
38 TestAddons/parallel/MetricsServer 6.96
40 TestAddons/parallel/CSI 55.84
41 TestAddons/parallel/Headlamp 19.97
42 TestAddons/parallel/CloudSpanner 6.88
43 TestAddons/parallel/LocalPath 56.31
44 TestAddons/parallel/NvidiaDevicePlugin 6.54
45 TestAddons/parallel/Yakd 11.93
47 TestAddons/StoppedEnableDisable 91.32
48 TestCertOptions 68.79
49 TestCertExpiration 330.81
51 TestForceSystemdFlag 72.11
52 TestForceSystemdEnv 97.96
54 TestKVMDriverInstallOrUpdate 5.09
58 TestErrorSpam/setup 44.9
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.79
61 TestErrorSpam/pause 1.76
62 TestErrorSpam/unpause 1.82
63 TestErrorSpam/stop 7.16
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 61.42
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 45.15
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.18
75 TestFunctional/serial/CacheCmd/cache/add_local 2.13
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 45.04
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.53
86 TestFunctional/serial/LogsFileCmd 1.47
87 TestFunctional/serial/InvalidService 3.96
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 27.59
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 0.84
97 TestFunctional/parallel/ServiceCmdConnect 10.55
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 44.44
101 TestFunctional/parallel/SSHCmd 0.44
102 TestFunctional/parallel/CpCmd 1.34
103 TestFunctional/parallel/MySQL 29.82
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.53
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
113 TestFunctional/parallel/License 0.65
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.56
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.45
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
120 TestFunctional/parallel/ImageCommands/ImageBuild 5.2
121 TestFunctional/parallel/ImageCommands/Setup 1.85
131 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.37
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.06
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
140 TestFunctional/parallel/ProfileCmd/profile_list 0.35
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
142 TestFunctional/parallel/MountCmd/any-port 10.85
143 TestFunctional/parallel/ServiceCmd/List 0.37
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
149 TestFunctional/parallel/ServiceCmd/Format 0.75
150 TestFunctional/parallel/ServiceCmd/URL 0.36
151 TestFunctional/parallel/MountCmd/specific-port 1.67
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 195.77
160 TestMultiControlPlane/serial/DeployApp 6.39
161 TestMultiControlPlane/serial/PingHostFromPods 1.29
162 TestMultiControlPlane/serial/AddWorkerNode 58.17
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
165 TestMultiControlPlane/serial/CopyFile 13.86
166 TestMultiControlPlane/serial/StopSecondaryNode 91.72
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
168 TestMultiControlPlane/serial/RestartSecondaryNode 46.83
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 486.93
171 TestMultiControlPlane/serial/DeleteSecondaryNode 7.39
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
173 TestMultiControlPlane/serial/StopCluster 273.08
174 TestMultiControlPlane/serial/RestartCluster 133.15
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
176 TestMultiControlPlane/serial/AddSecondaryNode 77.16
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
181 TestJSONOutput/start/Command 60.37
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.36
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 93.6
213 TestMountStart/serial/StartWithMountFirst 28.72
214 TestMountStart/serial/VerifyMountFirst 0.41
215 TestMountStart/serial/StartWithMountSecond 29.06
216 TestMountStart/serial/VerifyMountSecond 0.41
217 TestMountStart/serial/DeleteFirst 0.72
218 TestMountStart/serial/VerifyMountPostDelete 0.41
219 TestMountStart/serial/Stop 1.33
220 TestMountStart/serial/RestartStopped 24.41
221 TestMountStart/serial/VerifyMountPostStop 0.42
224 TestMultiNode/serial/FreshStart2Nodes 123.46
225 TestMultiNode/serial/DeployApp2Nodes 5.29
226 TestMultiNode/serial/PingHostFrom2Pods 0.84
227 TestMultiNode/serial/AddNode 53.39
228 TestMultiNode/serial/MultiNodeLabels 0.07
229 TestMultiNode/serial/ProfileList 0.62
230 TestMultiNode/serial/CopyFile 7.7
231 TestMultiNode/serial/StopNode 2.38
232 TestMultiNode/serial/StartAfterStop 35.7
233 TestMultiNode/serial/RestartKeepsNodes 316.11
234 TestMultiNode/serial/DeleteNode 2.32
235 TestMultiNode/serial/StopMultiNode 181.96
236 TestMultiNode/serial/RestartMultiNode 106.85
237 TestMultiNode/serial/ValidateNameConflict 45.67
242 TestPreload 159.66
244 TestScheduledStopUnix 115.04
248 TestRunningBinaryUpgrade 180.57
250 TestKubernetesUpgrade 148.14
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
254 TestNoKubernetes/serial/StartWithK8s 122.8
262 TestNetworkPlugins/group/false 3.55
266 TestNoKubernetes/serial/StartWithStopK8s 53.18
267 TestNoKubernetes/serial/Start 41.47
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
269 TestNoKubernetes/serial/ProfileList 1.92
270 TestNoKubernetes/serial/Stop 1.57
271 TestNoKubernetes/serial/StartNoArgs 28.83
272 TestStoppedBinaryUpgrade/Setup 2.57
273 TestStoppedBinaryUpgrade/Upgrade 123.55
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
283 TestPause/serial/Start 102.81
284 TestNetworkPlugins/group/auto/Start 103.94
285 TestPause/serial/SecondStartNoReconfiguration 71.62
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
287 TestNetworkPlugins/group/kindnet/Start 91.05
288 TestNetworkPlugins/group/calico/Start 119.99
289 TestNetworkPlugins/group/auto/KubeletFlags 0.25
290 TestNetworkPlugins/group/auto/NetCatPod 10.37
291 TestPause/serial/Pause 0.93
292 TestPause/serial/VerifyStatus 0.33
293 TestPause/serial/Unpause 0.87
294 TestPause/serial/PauseAgain 1.03
295 TestPause/serial/DeletePaused 1.14
296 TestPause/serial/VerifyDeletedResources 4.32
297 TestNetworkPlugins/group/custom-flannel/Start 81.66
298 TestNetworkPlugins/group/auto/DNS 0.16
299 TestNetworkPlugins/group/auto/Localhost 0.14
300 TestNetworkPlugins/group/auto/HairPin 0.13
301 TestNetworkPlugins/group/enable-default-cni/Start 71.38
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
304 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
305 TestNetworkPlugins/group/kindnet/DNS 0.2
306 TestNetworkPlugins/group/kindnet/Localhost 0.14
307 TestNetworkPlugins/group/kindnet/HairPin 0.16
308 TestNetworkPlugins/group/flannel/Start 82.62
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.24
311 TestNetworkPlugins/group/calico/NetCatPod 10.28
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.29
314 TestNetworkPlugins/group/calico/DNS 0.25
315 TestNetworkPlugins/group/calico/Localhost 0.19
316 TestNetworkPlugins/group/calico/HairPin 0.15
317 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
318 TestNetworkPlugins/group/custom-flannel/DNS 0.92
319 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.85
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
322 TestNetworkPlugins/group/bridge/Start 64.31
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
327 TestStartStop/group/old-k8s-version/serial/FirstStart 160.43
329 TestStartStop/group/no-preload/serial/FirstStart 121.96
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
332 TestNetworkPlugins/group/flannel/NetCatPod 10.29
333 TestNetworkPlugins/group/flannel/DNS 0.17
334 TestNetworkPlugins/group/flannel/Localhost 0.14
335 TestNetworkPlugins/group/flannel/HairPin 0.13
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
337 TestNetworkPlugins/group/bridge/NetCatPod 12.36
339 TestStartStop/group/embed-certs/serial/FirstStart 77.01
340 TestNetworkPlugins/group/bridge/DNS 0.15
341 TestNetworkPlugins/group/bridge/Localhost 0.12
342 TestNetworkPlugins/group/bridge/HairPin 0.13
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.63
345 TestStartStop/group/no-preload/serial/DeployApp 10.37
346 TestStartStop/group/embed-certs/serial/DeployApp 10.36
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.32
348 TestStartStop/group/no-preload/serial/Stop 91.1
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
350 TestStartStop/group/embed-certs/serial/Stop 90.88
351 TestStartStop/group/old-k8s-version/serial/DeployApp 10.44
352 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
353 TestStartStop/group/old-k8s-version/serial/Stop 91.08
354 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
356 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.39
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/no-preload/serial/SecondStart 303.69
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
362 TestStartStop/group/old-k8s-version/serial/SecondStart 185.01
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 322.4
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
368 TestStartStop/group/old-k8s-version/serial/Pause 2.71
370 TestStartStop/group/newest-cni/serial/FirstStart 49.4
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
373 TestStartStop/group/newest-cni/serial/Stop 7.5
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/newest-cni/serial/SecondStart 37.48
376 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
377 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
378 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
379 TestStartStop/group/no-preload/serial/Pause 3.3
380 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
383 TestStartStop/group/newest-cni/serial/Pause 2.93
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.71

TestDownloadOnly/v1.20.0/json-events (25.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-611402 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-611402 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (25.217984399s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 12:47:52.957093 1006263 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 12:47:52.957235 1006263 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
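
The preload-exists check only verifies that the preloaded image tarball is already on disk; a manual equivalent (a sketch, using the cache path printed in the log above) would be:

	ls -lh /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/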

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-611402
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-611402: exit status 85 (68.481838ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-611402 | jenkins | v1.35.0 | 20 Jan 25 12:47 UTC |          |
	|         | -p download-only-611402        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:47:27
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:47:27.784035 1006275 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:47:27.784317 1006275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:47:27.784328 1006275 out.go:358] Setting ErrFile to fd 2...
	I0120 12:47:27.784332 1006275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:47:27.784504 1006275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	W0120 12:47:27.784645 1006275 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20242-998973/.minikube/config/config.json: open /home/jenkins/minikube-integration/20242-998973/.minikube/config/config.json: no such file or directory
	I0120 12:47:27.785250 1006275 out.go:352] Setting JSON to true
	I0120 12:47:27.786251 1006275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8990,"bootTime":1737368258,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:47:27.786375 1006275 start.go:139] virtualization: kvm guest
	I0120 12:47:27.789050 1006275 out.go:97] [download-only-611402] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0120 12:47:27.789179 1006275 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 12:47:27.789216 1006275 notify.go:220] Checking for updates...
	I0120 12:47:27.790826 1006275 out.go:169] MINIKUBE_LOCATION=20242
	I0120 12:47:27.792357 1006275 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:47:27.794132 1006275 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 12:47:27.795482 1006275 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	I0120 12:47:27.796715 1006275 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 12:47:27.799285 1006275 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 12:47:27.799652 1006275 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:47:27.834347 1006275 out.go:97] Using the kvm2 driver based on user configuration
	I0120 12:47:27.834388 1006275 start.go:297] selected driver: kvm2
	I0120 12:47:27.834396 1006275 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:47:27.834755 1006275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:47:27.834870 1006275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-998973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:47:27.851326 1006275 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:47:27.851414 1006275 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:47:27.851957 1006275 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 12:47:27.852106 1006275 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 12:47:27.852137 1006275 cni.go:84] Creating CNI manager for ""
	I0120 12:47:27.852192 1006275 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:47:27.852201 1006275 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 12:47:27.852261 1006275 start.go:340] cluster config:
	{Name:download-only-611402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-611402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:47:27.852442 1006275 iso.go:125] acquiring lock: {Name:mk63965bcac7e5d2166c667dd03e4270f636bd53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:47:27.854634 1006275 out.go:97] Downloading VM boot image ...
	I0120 12:47:27.854674 1006275 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20242-998973/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:47:37.944128 1006275 out.go:97] Starting "download-only-611402" primary control-plane node in "download-only-611402" cluster
	I0120 12:47:37.944206 1006275 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 12:47:38.046730 1006275 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0120 12:47:38.046769 1006275 cache.go:56] Caching tarball of preloaded images
	I0120 12:47:38.046988 1006275 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 12:47:38.049679 1006275 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 12:47:38.049720 1006275 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0120 12:47:38.155692 1006275 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0120 12:47:51.233837 1006275 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0120 12:47:51.233942 1006275 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0120 12:47:52.150252 1006275 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0120 12:47:52.150725 1006275 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/download-only-611402/config.json ...
	I0120 12:47:52.150762 1006275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/download-only-611402/config.json: {Name:mk3e7159edbb4c591f49aceab9a232f8d5a779d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:47:52.150946 1006275 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 12:47:52.151133 1006275 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20242-998973/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-611402 host does not exist
	  To start a cluster, run: "minikube start -p download-only-611402"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-611402
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.0/json-events (12.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-184474 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-184474 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (12.659866654s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (12.66s)

                                                
                                    
TestDownloadOnly/v1.32.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 12:48:05.980646 1006263 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 12:48:05.980705 1006263 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-184474
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-184474: exit status 85 (71.860872ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-611402 | jenkins | v1.35.0 | 20 Jan 25 12:47 UTC |                     |
	|         | -p download-only-611402        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 12:47 UTC | 20 Jan 25 12:47 UTC |
	| delete  | -p download-only-611402        | download-only-611402 | jenkins | v1.35.0 | 20 Jan 25 12:47 UTC | 20 Jan 25 12:47 UTC |
	| start   | -o=json --download-only        | download-only-184474 | jenkins | v1.35.0 | 20 Jan 25 12:47 UTC |                     |
	|         | -p download-only-184474        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:47:53
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:47:53.367724 1006525 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:47:53.367856 1006525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:47:53.367865 1006525 out.go:358] Setting ErrFile to fd 2...
	I0120 12:47:53.367870 1006525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:47:53.368054 1006525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 12:47:53.368682 1006525 out.go:352] Setting JSON to true
	I0120 12:47:53.369857 1006525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9015,"bootTime":1737368258,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:47:53.369979 1006525 start.go:139] virtualization: kvm guest
	I0120 12:47:53.372505 1006525 out.go:97] [download-only-184474] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:47:53.372715 1006525 notify.go:220] Checking for updates...
	I0120 12:47:53.374495 1006525 out.go:169] MINIKUBE_LOCATION=20242
	I0120 12:47:53.376262 1006525 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:47:53.377876 1006525 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 12:47:53.379484 1006525 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	I0120 12:47:53.381155 1006525 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 12:47:53.384508 1006525 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 12:47:53.384846 1006525 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:47:53.420894 1006525 out.go:97] Using the kvm2 driver based on user configuration
	I0120 12:47:53.420934 1006525 start.go:297] selected driver: kvm2
	I0120 12:47:53.420942 1006525 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:47:53.421400 1006525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:47:53.421499 1006525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-998973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:47:53.438291 1006525 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:47:53.438361 1006525 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:47:53.439002 1006525 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 12:47:53.439160 1006525 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 12:47:53.439194 1006525 cni.go:84] Creating CNI manager for ""
	I0120 12:47:53.439245 1006525 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0120 12:47:53.439254 1006525 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 12:47:53.439309 1006525 start.go:340] cluster config:
	{Name:download-only-184474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:download-only-184474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:47:53.439405 1006525 iso.go:125] acquiring lock: {Name:mk63965bcac7e5d2166c667dd03e4270f636bd53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:47:53.441579 1006525 out.go:97] Starting "download-only-184474" primary control-plane node in "download-only-184474" cluster
	I0120 12:47:53.441617 1006525 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:47:53.992423 1006525 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
	I0120 12:47:53.992489 1006525 cache.go:56] Caching tarball of preloaded images
	I0120 12:47:53.992685 1006525 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:47:53.994686 1006525 out.go:97] Downloading Kubernetes v1.32.0 preload ...
	I0120 12:47:53.994718 1006525 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 ...
	I0120 12:47:54.103163 1006525 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:bb9e95697e147383ee2f722871c6c317 -> /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-184474 host does not exist
	  To start a cluster, run: "minikube start -p download-only-184474"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.07s)
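
Note: the preload handling visible in the log above (preload.go checking the cache, then the LogsDuration output showing the tarball download) reduces to a cache-or-fetch check. A minimal sketch of that check, assuming the default ~/.minikube layout rather than the MINIKUBE_HOME used by this job; the tarball name and bucket URL are the ones printed in the log, and the md5 verification the real code performs is omitted:

// Sketch only: is the v1.32.0 containerd preload already cached, or would a
// download-only run have to fetch it from the release bucket?
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const (
	tarball = "preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4"
	bucket  = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/"
)

func main() {
	home, _ := os.UserHomeDir()
	local := filepath.Join(home, ".minikube", "cache", "preloaded-tarball", tarball)

	if _, err := os.Stat(local); err == nil {
		// The fast path taken by the preload-exists subtest above.
		fmt.Println("Found local preload:", local)
		return
	}
	// Otherwise a download-only run fetches the remote preload, as the
	// LogsDuration output shows.
	fmt.Println("No local preload, would download:", bucket+tarball)
}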

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-184474
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I0120 12:48:06.638229 1006263 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-686414 --alsologtostderr --binary-mirror http://127.0.0.1:43791 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-686414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-686414
--- PASS: TestBinaryMirror (0.64s)
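
Note: TestBinaryMirror starts minikube with --binary-mirror pointing at a throwaway HTTP server on 127.0.0.1 (port 43791 above). A minimal sketch of such a mirror, assuming the kubectl binary and its .sha256 file are laid out under ./mirror using the same release paths as dl.k8s.io; the port and directory here are illustrative, not the test's:

// Sketch only: serve a local directory over loopback HTTP so it can be used
// as a --binary-mirror target, e.g.
//   minikube start --download-only --binary-mirror http://127.0.0.1:8080 ...
package main

import (
	"log"
	"net/http"
)

func main() {
	// Expects ./mirror/v1.32.0/bin/linux/amd64/kubectl and kubectl.sha256
	// (an assumed layout mirroring the dl.k8s.io release paths).
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", http.FileServer(http.Dir("./mirror"))))
}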

                                                
                                    
TestOffline (64.05s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-654953 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-654953 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m2.969774986s)
helpers_test.go:175: Cleaning up "offline-containerd-654953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-654953
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-654953: (1.076093346s)
--- PASS: TestOffline (64.05s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-843002
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-843002: exit status 85 (61.81362ms)

                                                
                                                
-- stdout --
	* Profile "addons-843002" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-843002"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-843002
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-843002: exit status 85 (60.36856ms)

                                                
                                                
-- stdout --
	* Profile "addons-843002" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-843002"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (213.17s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-843002 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-843002 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m33.171385193s)
--- PASS: TestAddons/Setup (213.17s)

                                                
                                    
TestAddons/serial/Volcano (42.87s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 22.758183ms
addons_test.go:823: volcano-controller stabilized in 22.792687ms
addons_test.go:815: volcano-admission stabilized in 22.843122ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-xn9hv" [969ca901-a439-4e0d-b7ef-87b927628e45] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004726716s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-fwmlk" [551bac00-7201-46c3-9191-cb474e3cac56] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005042123s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-mtbs2" [67b6a55e-191a-47fb-88f6-2fdbabf72eb3] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00481046s
addons_test.go:842: (dbg) Run:  kubectl --context addons-843002 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-843002 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-843002 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e619dae3-e7ba-457d-a953-7de70348f143] Pending
helpers_test.go:344: "test-job-nginx-0" [e619dae3-e7ba-457d-a953-7de70348f143] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e619dae3-e7ba-457d-a953-7de70348f143] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004449215s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-843002 addons disable volcano --alsologtostderr -v=1: (11.443449471s)
--- PASS: TestAddons/serial/Volcano (42.87s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-843002 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-843002 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-843002 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-843002 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ad1dc36-b486-4359-8d44-d7226e1cf500] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ad1dc36-b486-4359-8d44-d7226e1cf500] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004629943s
addons_test.go:633: (dbg) Run:  kubectl --context addons-843002 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-843002 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-843002 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

                                                
                                    
TestAddons/parallel/Registry (15.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.647485ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0120 12:52:42.073904 1006263 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 12:52:42.073936 1006263 kapi.go:107] duration metric: took 6.992003ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-6c88467877-mrs7p" [c721a323-6914-42e5-b794-43214b271152] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005539861s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jrxhf" [3990735b-d01b-4c8d-8037-a5ccb03af1a5] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004323991s
addons_test.go:331: (dbg) Run:  kubectl --context addons-843002 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-843002 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-843002 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.710914189s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 ip
2025/01/20 12:52:56 [DEBUG] GET http://192.168.39.232:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.53s)

                                                
                                    
TestAddons/parallel/Ingress (23.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-843002 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-843002 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-843002 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [34ca6578-f86d-4739-81b0-054b7c1c0164] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [34ca6578-f86d-4739-81b0-054b7c1c0164] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.009614207s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-843002 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.232
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-843002 addons disable ingress-dns --alsologtostderr -v=1: (1.800870018s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-843002 addons disable ingress --alsologtostderr -v=1: (7.970708778s)
--- PASS: TestAddons/parallel/Ingress (23.25s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vmrh5" [98c15033-ab3b-4f39-966b-dacca7d43b98] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004930576s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-843002 addons disable inspektor-gadget --alsologtostderr -v=1: (5.90223694s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.96s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.324377ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-bbl9k" [08006930-951d-44f5-b0b4-d341bf672e3c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004465478s
addons_test.go:402: (dbg) Run:  kubectl --context addons-843002 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.96s)

                                                
                                    
TestAddons/parallel/CSI (55.84s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0120 12:52:42.066986 1006263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.003238ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-843002 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-843002 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a7ea6254-7391-4b31-a9b2-b2e9d3c48191] Pending
helpers_test.go:344: "task-pv-pod" [a7ea6254-7391-4b31-a9b2-b2e9d3c48191] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a7ea6254-7391-4b31-a9b2-b2e9d3c48191] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.007767084s
addons_test.go:511: (dbg) Run:  kubectl --context addons-843002 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-843002 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-843002 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-843002 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-843002 delete pod task-pv-pod: (1.089028478s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-843002 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-843002 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-843002 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
I0120 12:53:21.125979 1006263 kapi.go:150] Service nginx in namespace default found.
helpers_test.go:344: "task-pv-pod-restore" [5c93fec9-4adb-4636-bb46-e671aab066b3] Pending
helpers_test.go:344: "task-pv-pod-restore" [5c93fec9-4adb-4636-bb46-e671aab066b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5c93fec9-4adb-4636-bb46-e671aab066b3] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00455546s
addons_test.go:553: (dbg) Run:  kubectl --context addons-843002 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-843002 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-843002 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-843002 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.832876825s)
--- PASS: TestAddons/parallel/CSI (55.84s)

                                                
                                    
TestAddons/parallel/Headlamp (19.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-843002 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-n9vt5" [8bc8fb93-8afa-4606-ab74-7f24c513a394] Pending
helpers_test.go:344: "headlamp-69d78d796f-n9vt5" [8bc8fb93-8afa-4606-ab74-7f24c513a394] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-n9vt5" [8bc8fb93-8afa-4606-ab74-7f24c513a394] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.00421841s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-843002 addons disable headlamp --alsologtostderr -v=1: (6.057750695s)
--- PASS: TestAddons/parallel/Headlamp (19.97s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.88s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-tvghb" [41223298-0456-4506-b755-d32f53726cd2] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004298365s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.88s)

                                                
                                    
TestAddons/parallel/LocalPath (56.31s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-843002 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-843002 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-843002 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c724da2b-af80-4dd2-82c7-7560a710fa67] Pending
helpers_test.go:344: "test-local-path" [c724da2b-af80-4dd2-82c7-7560a710fa67] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c724da2b-af80-4dd2-82c7-7560a710fa67] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c724da2b-af80-4dd2-82c7-7560a710fa67] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006412795s
addons_test.go:906: (dbg) Run:  kubectl --context addons-843002 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 ssh "cat /opt/local-path-provisioner/pvc-bcaf2a6e-3b3a-4d8c-955a-eb43af33b6e2_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-843002 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-843002 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-843002 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.399912261s)
--- PASS: TestAddons/parallel/LocalPath (56.31s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-h2klv" [75112e3f-f2eb-43cc-8248-c28c798878ad] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00433085s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
TestAddons/parallel/Yakd (11.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-s2zxz" [53969603-6e88-4bf1-9a79-e80a25544cc9] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.009954913s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-843002 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-843002 addons disable yakd --alsologtostderr -v=1: (5.916713179s)
--- PASS: TestAddons/parallel/Yakd (11.93s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-843002
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-843002: (1m31.001014552s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-843002
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-843002
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-843002
--- PASS: TestAddons/StoppedEnableDisable (91.32s)

                                                
                                    
TestCertOptions (68.79s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-652748 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-652748 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m7.455063851s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-652748 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-652748 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-652748 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-652748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-652748
--- PASS: TestCertOptions (68.79s)

                                                
                                    
TestCertExpiration (330.81s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-700273 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-700273 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m12.578030687s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-700273 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-700273 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (1m17.003579604s)
helpers_test.go:175: Cleaning up "cert-expiration-700273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-700273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-700273: (1.229823504s)
--- PASS: TestCertExpiration (330.81s)

                                                
                                    
TestForceSystemdFlag (72.11s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-571311 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-571311 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m10.815922884s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-571311 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-571311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-571311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-571311: (1.070921875s)
--- PASS: TestForceSystemdFlag (72.11s)

                                                
                                    
TestForceSystemdEnv (97.96s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-694770 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-694770 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m36.708337158s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-694770 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-694770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-694770
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-694770: (1.031096904s)
--- PASS: TestForceSystemdEnv (97.96s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.09s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0120 13:48:05.196461 1006263 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 13:48:05.196715 1006263 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0120 13:48:05.242990 1006263 install.go:62] docker-machine-driver-kvm2: exit status 1
W0120 13:48:05.243620 1006263 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 13:48:05.243693 1006263 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate345135047/001/docker-machine-driver-kvm2
I0120 13:48:05.568387 1006263 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate345135047/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc0005aaf00 gz:0xc0005aaf08 tar:0xc0005aae80 tar.bz2:0xc0005aae90 tar.gz:0xc0005aaec0 tar.xz:0xc0005aaed0 tar.zst:0xc0005aaef0 tbz2:0xc0005aae90 tgz:0xc0005aaec0 txz:0xc0005aaed0 tzst:0xc0005aaef0 xz:0xc0005aaf30 zip:0xc0005aaf60 zst:0xc0005aaf38] Getters:map[file:0xc0007acef0 http:0xc0007b7720 https:0xc0007b7770] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0120 13:48:05.568457 1006263 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate345135047/001/docker-machine-driver-kvm2
I0120 13:48:08.294073 1006263 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 13:48:08.294217 1006263 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 13:48:08.337087 1006263 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0120 13:48:08.337141 1006263 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0120 13:48:08.337235 1006263 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 13:48:08.337279 1006263 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate345135047/002/docker-machine-driver-kvm2
I0120 13:48:08.398868 1006263 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate345135047/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc0005aaf00 gz:0xc0005aaf08 tar:0xc0005aae80 tar.bz2:0xc0005aae90 tar.gz:0xc0005aaec0 tar.xz:0xc0005aaed0 tar.zst:0xc0005aaef0 tbz2:0xc0005aae90 tgz:0xc0005aaec0 txz:0xc0005aaed0 tzst:0xc0005aaef0 xz:0xc0005aaf30 zip:0xc0005aaf60 zst:0xc0005aaf38] Getters:map[file:0xc0019ec1b0 http:0xc000074d70 https:0xc000074dc0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0120 13:48:08.398931 1006263 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate345135047/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.09s)
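
The driver.go:46 / download.go:108 pairs above show the fallback the installer applies: the GOARCH-suffixed release asset is tried first, and when its checksum file 404s the un-suffixed "common" asset is fetched instead. The sketch below is a minimal, hypothetical rendering of that pattern; the downloadFile/downloadDriver helpers are invented for illustration (minikube's real code goes through go-getter):

```go
// A minimal sketch of the arch-specific-then-common download fallback
// recorded in the log above. Helper names are hypothetical.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"runtime"
)

// downloadFile fetches url to dest and fails on any non-200 status,
// mirroring the "bad response code: 404" error in the log.
func downloadFile(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

// downloadDriver tries the GOARCH-suffixed release asset first and falls
// back to the common (unsuffixed) name when that fails.
func downloadDriver(version, dest string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version + "/docker-machine-driver-kvm2"
	archURL := fmt.Sprintf("%s-%s", base, runtime.GOARCH)
	if err := downloadFile(archURL, dest); err != nil {
		fmt.Printf("failed to download arch specific driver: %v. trying to get the common version\n", err)
		return downloadFile(base, dest)
	}
	return nil
}

func main() {
	if err := downloadDriver("v1.3.0", "/tmp/docker-machine-driver-kvm2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```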

                                                
                                    
x
+
TestErrorSpam/setup (44.9s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-954922 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-954922 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-954922 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-954922 --driver=kvm2  --container-runtime=containerd: (44.898358857s)
--- PASS: TestErrorSpam/setup (44.90s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
x
+
TestErrorSpam/pause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 pause
--- PASS: TestErrorSpam/pause (1.76s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.82s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

                                                
                                    
x
+
TestErrorSpam/stop (7.16s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 stop: (3.343264831s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 stop: (1.926294414s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-954922 --log_dir /tmp/nospam-954922 stop: (1.885929515s)
--- PASS: TestErrorSpam/stop (7.16s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/test/nested/copy/1006263/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (61.42s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817722 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0120 12:56:40.525378 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:40.531822 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:40.543249 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:40.564753 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:40.606276 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:40.687826 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:40.849420 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:41.171197 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:41.813360 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:43.094810 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:45.657814 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:50.779580 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:01.021768 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:21.503982 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-817722 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m1.420056358s)
--- PASS: TestFunctional/serial/StartWithProxy (61.42s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (45.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0120 12:57:29.316988 1006263 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817722 --alsologtostderr -v=8
E0120 12:58:02.465459 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-817722 --alsologtostderr -v=8: (45.146955486s)
functional_test.go:663: soft start took 45.147723672s for "functional-817722" cluster.
I0120 12:58:14.464424 1006263 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (45.15s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-817722 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-817722 cache add registry.k8s.io/pause:3.1: (1.049176481s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-817722 cache add registry.k8s.io/pause:3.3: (1.115348492s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-817722 cache add registry.k8s.io/pause:latest: (1.016837216s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-817722 /tmp/TestFunctionalserialCacheCmdcacheadd_local1061406095/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 cache add minikube-local-cache-test:functional-817722
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-817722 cache add minikube-local-cache-test:functional-817722: (1.782768408s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 cache delete minikube-local-cache-test:functional-817722
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-817722
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (229.122352ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
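
For reference, the cache_reload steps above (remove the image on the node, confirm crictl inspecti fails, run cache reload, confirm inspecti succeeds) can be reproduced by hand. The following is a rough sketch assuming minikube is on PATH and the functional-817722 profile is running (the real test drives the freshly built out/minikube-linux-amd64 binary):

```go
// Rough reproduction of the cache-reload verification flow shown above.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	const profile = "functional-817722"
	const img = "registry.k8s.io/pause:latest"

	// Remove the image inside the node, then confirm inspecti now fails.
	_ = run("minikube", "-p", profile, "ssh", "sudo crictl rmi "+img)
	if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
		return
	}

	// Reload the on-disk cache into the node and verify the image is back.
	_ = run("minikube", "-p", profile, "cache", "reload")
	if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}
```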

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 kubectl -- --context functional-817722 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-817722 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (45.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817722 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-817722 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.036100406s)
functional_test.go:761: restart took 45.036237176s for "functional-817722" cluster.
I0120 12:59:07.318235 1006263 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (45.04s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-817722 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-817722 logs: (1.530094836s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 logs --file /tmp/TestFunctionalserialLogsFileCmd3125431045/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-817722 logs --file /tmp/TestFunctionalserialLogsFileCmd3125431045/001/logs.txt: (1.466376783s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-817722 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-817722
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-817722: exit status 115 (293.448191ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.218:30402 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-817722 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
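
The SVC_UNREACHABLE exit above is triggered because invalid-svc exposes a NodePort but has no running pod behind it. One way to check for that condition before trusting the printed URL is to look at the service's ready endpoints; a small illustrative sketch (the kubectl context and the jsonpath expression are assumptions for this run):

```go
// Check whether a service has any ready endpoints before using its URL.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-817722",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("could not read endpoints:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("no running pod for service invalid-svc found")
		return
	}
	fmt.Println("ready endpoints:", string(out))
}
```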

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 config get cpus: exit status 14 (70.124884ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 config get cpus: exit status 14 (56.034758ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (27.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-817722 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-817722 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1014594: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (27.59s)
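
The helpers_test.go:508 line above is the cleanup path tolerating a dashboard process that has already exited ("os: process already finished"). A minimal sketch of that pattern in Go, with a sleep command standing in for the daemon:

```go
// Stopping a background process that may already have exited.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sleep", "1") // stands in for the dashboard daemon
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	_ = cmd.Wait() // the daemon exits on its own before cleanup runs

	// Cleanup: killing an already-finished process is not a test failure.
	if err := cmd.Process.Kill(); err != nil && errors.Is(err, os.ErrProcessDone) {
		fmt.Printf("unable to kill pid %d: %v\n", cmd.Process.Pid, err)
	}
}
```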

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817722 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-817722 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (156.256627ms)

                                                
                                                
-- stdout --
	* [functional-817722] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:59:28.312015 1014468 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:59:28.312118 1014468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:59:28.312126 1014468 out.go:358] Setting ErrFile to fd 2...
	I0120 12:59:28.312130 1014468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:59:28.312330 1014468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 12:59:28.312903 1014468 out.go:352] Setting JSON to false
	I0120 12:59:28.314181 1014468 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9710,"bootTime":1737368258,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:59:28.314330 1014468 start.go:139] virtualization: kvm guest
	I0120 12:59:28.315887 1014468 out.go:177] * [functional-817722] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:59:28.317625 1014468 notify.go:220] Checking for updates...
	I0120 12:59:28.318396 1014468 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 12:59:28.321641 1014468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:59:28.323211 1014468 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 12:59:28.324598 1014468 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	I0120 12:59:28.326159 1014468 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:59:28.327792 1014468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:59:28.329869 1014468 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:59:28.330550 1014468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:59:28.330613 1014468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:59:28.347805 1014468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0120 12:59:28.348413 1014468 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:59:28.349092 1014468 main.go:141] libmachine: Using API Version  1
	I0120 12:59:28.349111 1014468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:59:28.349538 1014468 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:59:28.349787 1014468 main.go:141] libmachine: (functional-817722) Calling .DriverName
	I0120 12:59:28.350042 1014468 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:59:28.350360 1014468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:59:28.350399 1014468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:59:28.367730 1014468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0120 12:59:28.368166 1014468 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:59:28.368697 1014468 main.go:141] libmachine: Using API Version  1
	I0120 12:59:28.368723 1014468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:59:28.369102 1014468 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:59:28.369363 1014468 main.go:141] libmachine: (functional-817722) Calling .DriverName
	I0120 12:59:28.404802 1014468 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 12:59:28.406165 1014468 start.go:297] selected driver: kvm2
	I0120 12:59:28.406181 1014468 start.go:901] validating driver "kvm2" against &{Name:functional-817722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-817722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:59:28.406287 1014468 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:59:28.408472 1014468 out.go:201] 
	W0120 12:59:28.409952 1014468 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 12:59:28.411352 1014468 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817722 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.31s)
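
The exit status 23 above comes from the requested 250MB being below the 1800MB usable minimum quoted in the error text. A toy version of that validation follows; the unit parsing is deliberately simplified to the plain "NNNMB" form used by the test flag (minikube itself accepts several suffixes):

```go
// Toy memory-request validation modelled on the RSRC_INSUFFICIENT_REQ_MEMORY exit above.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseMiB handles only the plain "NNNMB" form used in the test flags.
func parseMiB(s string) (int, error) {
	s = strings.TrimSuffix(strings.ToUpper(strings.TrimSpace(s)), "MB")
	return strconv.Atoi(s)
}

func main() {
	const minUsableMiB = 1800 // floor taken from the error message above
	req, err := parseMiB("250MB")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if req < minUsableMiB {
		fmt.Printf("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n", req, minUsableMiB)
		os.Exit(23)
	}
}
```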

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817722 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-817722 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (167.992266ms)

                                                
                                                
-- stdout --
	* [functional-817722] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 12:59:28.212860 1014438 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:59:28.213077 1014438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:59:28.213091 1014438 out.go:358] Setting ErrFile to fd 2...
	I0120 12:59:28.213098 1014438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:59:28.213491 1014438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 12:59:28.214310 1014438 out.go:352] Setting JSON to false
	I0120 12:59:28.215791 1014438 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9710,"bootTime":1737368258,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:59:28.215933 1014438 start.go:139] virtualization: kvm guest
	I0120 12:59:28.218402 1014438 out.go:177] * [functional-817722] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0120 12:59:28.220067 1014438 notify.go:220] Checking for updates...
	I0120 12:59:28.220105 1014438 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 12:59:28.221628 1014438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:59:28.223004 1014438 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 12:59:28.224503 1014438 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	I0120 12:59:28.225804 1014438 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:59:28.227548 1014438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:59:28.229330 1014438 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:59:28.229807 1014438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:59:28.229867 1014438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:59:28.248618 1014438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44071
	I0120 12:59:28.249176 1014438 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:59:28.250094 1014438 main.go:141] libmachine: Using API Version  1
	I0120 12:59:28.250128 1014438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:59:28.250605 1014438 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:59:28.250805 1014438 main.go:141] libmachine: (functional-817722) Calling .DriverName
	I0120 12:59:28.251095 1014438 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:59:28.251428 1014438 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 12:59:28.251485 1014438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:59:28.271426 1014438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0120 12:59:28.272060 1014438 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:59:28.272814 1014438 main.go:141] libmachine: Using API Version  1
	I0120 12:59:28.272852 1014438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:59:28.273292 1014438 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:59:28.273569 1014438 main.go:141] libmachine: (functional-817722) Calling .DriverName
	I0120 12:59:28.313122 1014438 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0120 12:59:28.314581 1014438 start.go:297] selected driver: kvm2
	I0120 12:59:28.314600 1014438 start.go:901] validating driver "kvm2" against &{Name:functional-817722 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-817722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.218 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:59:28.314776 1014438 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:59:28.316714 1014438 out.go:201] 
	W0120 12:59:28.318472 1014438 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 12:59:28.319999 1014438 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-817722 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-817722 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-9s9s2" [8fa09884-eb39-41aa-80e1-e59869eb6d78] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-9s9s2" [8fa09884-eb39-41aa-80e1-e59869eb6d78] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005522688s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.218:32745
functional_test.go:1675: http://192.168.39.218:32745: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-9s9s2

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.218:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.218:32745
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.55s)
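
The test above resolves the NodePort URL with "minikube service ... --url" and then polls it until the echoserver answers. A self-contained sketch of the polling step; the URL is the one from this particular run and will differ on another cluster:

```go
// Poll a NodePort URL until it answers 200 OK or a deadline passes.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForURL(url string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			return string(body), nil
		}
		if resp != nil {
			resp.Body.Close()
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("%s not reachable within %s (last error: %v)", url, timeout, err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	body, err := waitForURL("http://192.168.39.218:32745", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("success! body:", body)
}
```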

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d1e9cb3c-4a15-476e-b846-d7f86a1ef5c8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004416702s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-817722 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-817722 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-817722 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-817722 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e6c8fcde-f807-4a7f-b594-32036b1920c0] Pending
helpers_test.go:344: "sp-pod" [e6c8fcde-f807-4a7f-b594-32036b1920c0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e6c8fcde-f807-4a7f-b594-32036b1920c0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007965802s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-817722 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-817722 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-817722 delete -f testdata/storage-provisioner/pod.yaml: (1.545071496s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-817722 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e84b44ab-a099-4441-9aa8-e1902440570a] Pending
helpers_test.go:344: "sp-pod" [e84b44ab-a099-4441-9aa8-e1902440570a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e84b44ab-a099-4441-9aa8-e1902440570a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004475788s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-817722 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.44s)
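
The PVC steps above boil down to: write a marker file through the first sp-pod, recreate the pod against the same claim, and confirm the marker survives. A rough equivalent driven through kubectl (the context and the testdata/storage-provisioner manifest paths are taken from this run and assume a checkout of the minikube repo):

```go
// Verify that data written to a PVC-backed mount survives pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-817722"}, args...)...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	// Write a marker file into the PVC-backed mount from the first pod.
	_ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the claim (and its data) should outlive it.
	_ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")

	// The marker written by the previous pod must still be visible.
	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); err != nil {
		fmt.Println("marker file missing after pod recreation:", err)
	}
}
```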

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh -n functional-817722 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 cp functional-817722:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1931276568/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh -n functional-817722 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh -n functional-817722 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (29.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-817722 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-64gx7" [ee2b4508-5971-47ec-b4ac-275818507757] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-64gx7" [ee2b4508-5971-47ec-b4ac-275818507757] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.010192746s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;": exit status 1 (145.464154ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0120 12:59:48.948299 1006263 retry.go:31] will retry after 738.610671ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;": exit status 1 (177.630176ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0120 12:59:49.865601 1006263 retry.go:31] will retry after 831.954353ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;": exit status 1 (179.983407ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0120 12:59:50.878842 1006263 retry.go:31] will retry after 1.646341513s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;": exit status 1 (349.292013ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0120 12:59:52.875667 1006263 retry.go:31] will retry after 1.914124061s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;": exit status 1 (125.401539ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0120 12:59:54.916296 1006263 retry.go:31] will retry after 3.35027915s: exit status 1
2025/01/20 12:59:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1807: (dbg) Run:  kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.82s)
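The early failures above are the mysqld warm-up window (ERROR 2002 while the socket is not up yet, ERROR 1045 while the credentials are not yet in place); the harness simply retries with a growing delay until the query succeeds. A minimal sketch of that retry pattern, reusing the pod name and password from this run (the delays are illustrative; the harness uses similar randomized ones):
# retry "show databases;" until mysqld inside the pod accepts the connection
for delay in 1 2 4 8 16; do
  if kubectl --context functional-817722 exec mysql-58ccfd96bb-64gx7 -- \
      mysql -ppassword -e "show databases;"; then
    break
  fi
  sleep "$delay"
done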

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1006263/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo cat /etc/test/nested/copy/1006263/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)
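FileSync verifies that a file dropped into the profile's file-sync directory on the host (under $MINIKUBE_HOME/files in the standard layout) is mirrored to the same path inside the VM; the /etc/test/nested/copy/1006263/hosts path comes from this run. The check itself is a single ssh read:
# the synced file should be readable at the mirrored path inside the guest
out/minikube-linux-amd64 -p functional-817722 ssh "sudo cat /etc/test/nested/copy/1006263/hosts"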

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1006263.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo cat /etc/ssl/certs/1006263.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1006263.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo cat /usr/share/ca-certificates/1006263.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/10062632.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo cat /etc/ssl/certs/10062632.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/10062632.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo cat /usr/share/ca-certificates/10062632.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.53s)
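CertSync looks each injected certificate up twice: under its PEM name and under a hashed alias such as /etc/ssl/certs/51391683.0, the usual OpenSSL subject-hash naming for CA directories. A sketch of how such an alias is derived, assuming openssl is available inside the guest:
# print the subject hash that the .0 alias is derived from
out/minikube-linux-amd64 -p functional-817722 ssh "sudo openssl x509 -noout -hash -in /etc/ssl/certs/1006263.pem"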

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-817722 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
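The go-template above prints only the label keys of the first node; when reading results by hand, the same information is easier to see with:
# list nodes together with their full label sets
kubectl --context functional-817722 get nodes --show-labels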

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 ssh "sudo systemctl is-active docker": exit status 1 (230.500899ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 ssh "sudo systemctl is-active crio": exit status 1 (235.586867ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
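The exit status 3 relayed over ssh is systemctl's normal return code for an inactive unit, so the non-zero exits above are exactly what this test expects with containerd as the active runtime. The same check by hand:
# is-active prints the unit state and exits non-zero when the unit is not active
out/minikube-linux-amd64 -p functional-817722 ssh "sudo systemctl is-active docker" || echo "docker is disabled, as expected"
out/minikube-linux-amd64 -p functional-817722 ssh "sudo systemctl is-active crio" || echo "crio is disabled, as expected"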

                                                
                                    
x
+
TestFunctional/parallel/License (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-817722 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-817722
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-817722
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817722 image ls --format short --alsologtostderr:
I0120 12:59:39.719868 1015157 out.go:345] Setting OutFile to fd 1 ...
I0120 12:59:39.720023 1015157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:39.720035 1015157 out.go:358] Setting ErrFile to fd 2...
I0120 12:59:39.720039 1015157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:39.720214 1015157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
I0120 12:59:39.720884 1015157 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:39.721025 1015157 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:39.721438 1015157 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:39.721503 1015157 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:39.737402 1015157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
I0120 12:59:39.737942 1015157 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:39.738644 1015157 main.go:141] libmachine: Using API Version  1
I0120 12:59:39.738672 1015157 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:39.739055 1015157 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:39.739327 1015157 main.go:141] libmachine: (functional-817722) Calling .GetState
I0120 12:59:39.741288 1015157 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:39.741330 1015157 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:39.756913 1015157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34511
I0120 12:59:39.757442 1015157 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:39.758034 1015157 main.go:141] libmachine: Using API Version  1
I0120 12:59:39.758066 1015157 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:39.758390 1015157 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:39.758628 1015157 main.go:141] libmachine: (functional-817722) Calling .DriverName
I0120 12:59:39.758911 1015157 ssh_runner.go:195] Run: systemctl --version
I0120 12:59:39.758951 1015157 main.go:141] libmachine: (functional-817722) Calling .GetSSHHostname
I0120 12:59:39.761805 1015157 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:39.762262 1015157 main.go:141] libmachine: (functional-817722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:d3:6c", ip: ""} in network mk-functional-817722: {Iface:virbr1 ExpiryTime:2025-01-20 13:56:43 +0000 UTC Type:0 Mac:52:54:00:a5:d3:6c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-817722 Clientid:01:52:54:00:a5:d3:6c}
I0120 12:59:39.762303 1015157 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined IP address 192.168.39.218 and MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:39.762353 1015157 main.go:141] libmachine: (functional-817722) Calling .GetSSHPort
I0120 12:59:39.762563 1015157 main.go:141] libmachine: (functional-817722) Calling .GetSSHKeyPath
I0120 12:59:39.762718 1015157 main.go:141] libmachine: (functional-817722) Calling .GetSSHUsername
I0120 12:59:39.762863 1015157 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/functional-817722/id_rsa Username:docker}
I0120 12:59:39.841771 1015157 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:59:39.895402 1015157 main.go:141] libmachine: Making call to close driver server
I0120 12:59:39.895428 1015157 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:39.895734 1015157 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:39.895761 1015157 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:59:39.895770 1015157 main.go:141] libmachine: Making call to close driver server
I0120 12:59:39.895779 1015157 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:39.895779 1015157 main.go:141] libmachine: (functional-817722) DBG | Closing plugin on server side
I0120 12:59:39.896036 1015157 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:39.896079 1015157 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:59:39.896102 1015157 main.go:141] libmachine: (functional-817722) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-817722 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.32.0            | sha256:8cab3d | 26.3MB |
| registry.k8s.io/kube-proxy                  | v1.32.0            | sha256:040f9f | 30.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| docker.io/library/nginx                     | latest             | sha256:9bea9f | 72.1MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.32.0            | sha256:a389e1 | 20.7MB |
| docker.io/kicbase/echo-server               | functional-817722  | sha256:9056ab | 2.37MB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-apiserver              | v1.32.0            | sha256:c2e17b | 28.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/library/minikube-local-cache-test | functional-817722  | sha256:4435b1 | 990B   |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817722 image ls --format table --alsologtostderr:
I0120 12:59:40.402066 1015205 out.go:345] Setting OutFile to fd 1 ...
I0120 12:59:40.402211 1015205 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:40.402230 1015205 out.go:358] Setting ErrFile to fd 2...
I0120 12:59:40.402237 1015205 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:40.402484 1015205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
I0120 12:59:40.403104 1015205 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:40.403211 1015205 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:40.403578 1015205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:40.403622 1015205 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:40.419999 1015205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
I0120 12:59:40.420672 1015205 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:40.421357 1015205 main.go:141] libmachine: Using API Version  1
I0120 12:59:40.421381 1015205 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:40.421757 1015205 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:40.421964 1015205 main.go:141] libmachine: (functional-817722) Calling .GetState
I0120 12:59:40.424057 1015205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:40.424110 1015205 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:40.440283 1015205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35125
I0120 12:59:40.440766 1015205 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:40.441373 1015205 main.go:141] libmachine: Using API Version  1
I0120 12:59:40.441408 1015205 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:40.441744 1015205 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:40.441953 1015205 main.go:141] libmachine: (functional-817722) Calling .DriverName
I0120 12:59:40.442168 1015205 ssh_runner.go:195] Run: systemctl --version
I0120 12:59:40.442194 1015205 main.go:141] libmachine: (functional-817722) Calling .GetSSHHostname
I0120 12:59:40.445165 1015205 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:40.445564 1015205 main.go:141] libmachine: (functional-817722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:d3:6c", ip: ""} in network mk-functional-817722: {Iface:virbr1 ExpiryTime:2025-01-20 13:56:43 +0000 UTC Type:0 Mac:52:54:00:a5:d3:6c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-817722 Clientid:01:52:54:00:a5:d3:6c}
I0120 12:59:40.445591 1015205 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined IP address 192.168.39.218 and MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:40.445803 1015205 main.go:141] libmachine: (functional-817722) Calling .GetSSHPort
I0120 12:59:40.446056 1015205 main.go:141] libmachine: (functional-817722) Calling .GetSSHKeyPath
I0120 12:59:40.446228 1015205 main.go:141] libmachine: (functional-817722) Calling .GetSSHUsername
I0120 12:59:40.446411 1015205 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/functional-817722/id_rsa Username:docker}
I0120 12:59:40.529380 1015205 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:59:40.584746 1015205 main.go:141] libmachine: Making call to close driver server
I0120 12:59:40.584765 1015205 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:40.585136 1015205 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:40.585165 1015205 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:59:40.585175 1015205 main.go:141] libmachine: Making call to close driver server
I0120 12:59:40.585184 1015205 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:40.585489 1015205 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:40.585505 1015205 main.go:141] libmachine: (functional-817722) DBG | Closing plugin on server side
I0120 12:59:40.585515 1015205 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-817722 image ls --format json --alsologtostderr:
[{"id":"sha256:4435b1c2387415f59a838e960bd071fb9f54b4ddc995cd545645c85898bfb4a7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-817722"],"size":"990"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"30906462"},{"id":"sha256:a389e107f4ff1130c69849f0af08cb
ce9a1dfe3b7c39874012587d233807cfc5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"20656471"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-817722"],"size":"2372971"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2
a4712a84da5595d4bde617d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"26254834"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"72080558"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","repoDigests":["re
gistry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"28670542"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817722 image ls --format json --alsologtostderr:
I0120 12:59:39.955936 1015181 out.go:345] Setting OutFile to fd 1 ...
I0120 12:59:39.956056 1015181 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:39.956061 1015181 out.go:358] Setting ErrFile to fd 2...
I0120 12:59:39.956066 1015181 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:39.956290 1015181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
I0120 12:59:39.957041 1015181 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:39.957161 1015181 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:39.957538 1015181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:39.957603 1015181 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:39.973767 1015181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
I0120 12:59:39.974372 1015181 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:39.975027 1015181 main.go:141] libmachine: Using API Version  1
I0120 12:59:39.975051 1015181 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:39.975452 1015181 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:39.975688 1015181 main.go:141] libmachine: (functional-817722) Calling .GetState
I0120 12:59:39.977979 1015181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:39.978039 1015181 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:39.994988 1015181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43757
I0120 12:59:39.995678 1015181 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:39.996327 1015181 main.go:141] libmachine: Using API Version  1
I0120 12:59:39.996377 1015181 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:39.996836 1015181 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:39.997076 1015181 main.go:141] libmachine: (functional-817722) Calling .DriverName
I0120 12:59:39.997386 1015181 ssh_runner.go:195] Run: systemctl --version
I0120 12:59:39.997424 1015181 main.go:141] libmachine: (functional-817722) Calling .GetSSHHostname
I0120 12:59:40.000359 1015181 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:40.000979 1015181 main.go:141] libmachine: (functional-817722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:d3:6c", ip: ""} in network mk-functional-817722: {Iface:virbr1 ExpiryTime:2025-01-20 13:56:43 +0000 UTC Type:0 Mac:52:54:00:a5:d3:6c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-817722 Clientid:01:52:54:00:a5:d3:6c}
I0120 12:59:40.001023 1015181 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined IP address 192.168.39.218 and MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:40.001290 1015181 main.go:141] libmachine: (functional-817722) Calling .GetSSHPort
I0120 12:59:40.001546 1015181 main.go:141] libmachine: (functional-817722) Calling .GetSSHKeyPath
I0120 12:59:40.001726 1015181 main.go:141] libmachine: (functional-817722) Calling .GetSSHUsername
I0120 12:59:40.001944 1015181 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/functional-817722/id_rsa Username:docker}
I0120 12:59:40.088242 1015181 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:59:40.140477 1015181 main.go:141] libmachine: Making call to close driver server
I0120 12:59:40.140499 1015181 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:40.140878 1015181 main.go:141] libmachine: (functional-817722) DBG | Closing plugin on server side
I0120 12:59:40.140941 1015181 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:40.140956 1015181 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:59:40.141001 1015181 main.go:141] libmachine: Making call to close driver server
I0120 12:59:40.141013 1015181 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:40.141301 1015181 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:40.141319 1015181 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)
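The JSON listing is the easiest of the four formats to post-process; a small sketch, assuming jq is installed on the host, that reduces it to tag/size pairs:
# print "first repoTag <tab> size in bytes" for every image known to the runtime
out/minikube-linux-amd64 -p functional-817722 image ls --format json \
  | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'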

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-817722 image ls --format yaml --alsologtostderr:
- id: sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "72080558"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "30906462"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-817722
size: "2372971"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "26254834"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:4435b1c2387415f59a838e960bd071fb9f54b4ddc995cd545645c85898bfb4a7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-817722
size: "990"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "28670542"
- id: sha256:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "20656471"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817722 image ls --format yaml --alsologtostderr:
I0120 12:59:40.638758 1015229 out.go:345] Setting OutFile to fd 1 ...
I0120 12:59:40.639051 1015229 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:40.639063 1015229 out.go:358] Setting ErrFile to fd 2...
I0120 12:59:40.639067 1015229 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:40.639285 1015229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
I0120 12:59:40.639935 1015229 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:40.640054 1015229 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:40.640440 1015229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:40.640525 1015229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:40.656064 1015229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
I0120 12:59:40.656641 1015229 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:40.657417 1015229 main.go:141] libmachine: Using API Version  1
I0120 12:59:40.657450 1015229 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:40.657864 1015229 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:40.659618 1015229 main.go:141] libmachine: (functional-817722) Calling .GetState
I0120 12:59:40.661882 1015229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:40.661944 1015229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:40.677291 1015229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
I0120 12:59:40.677826 1015229 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:40.678388 1015229 main.go:141] libmachine: Using API Version  1
I0120 12:59:40.678409 1015229 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:40.678719 1015229 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:40.678930 1015229 main.go:141] libmachine: (functional-817722) Calling .DriverName
I0120 12:59:40.679146 1015229 ssh_runner.go:195] Run: systemctl --version
I0120 12:59:40.679172 1015229 main.go:141] libmachine: (functional-817722) Calling .GetSSHHostname
I0120 12:59:40.682005 1015229 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:40.682405 1015229 main.go:141] libmachine: (functional-817722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:d3:6c", ip: ""} in network mk-functional-817722: {Iface:virbr1 ExpiryTime:2025-01-20 13:56:43 +0000 UTC Type:0 Mac:52:54:00:a5:d3:6c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-817722 Clientid:01:52:54:00:a5:d3:6c}
I0120 12:59:40.682442 1015229 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined IP address 192.168.39.218 and MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:40.682550 1015229 main.go:141] libmachine: (functional-817722) Calling .GetSSHPort
I0120 12:59:40.682747 1015229 main.go:141] libmachine: (functional-817722) Calling .GetSSHKeyPath
I0120 12:59:40.682938 1015229 main.go:141] libmachine: (functional-817722) Calling .GetSSHUsername
I0120 12:59:40.683113 1015229 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/functional-817722/id_rsa Username:docker}
I0120 12:59:40.769376 1015229 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 12:59:40.848785 1015229 main.go:141] libmachine: Making call to close driver server
I0120 12:59:40.848800 1015229 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:40.849179 1015229 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:40.849205 1015229 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:59:40.849216 1015229 main.go:141] libmachine: Making call to close driver server
I0120 12:59:40.849226 1015229 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:40.849494 1015229 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:40.849516 1015229 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:59:40.849550 1015229 main.go:141] libmachine: (functional-817722) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 ssh pgrep buildkitd: exit status 1 (218.71282ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image build -t localhost/my-image:functional-817722 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-817722 image build -t localhost/my-image:functional-817722 testdata/build --alsologtostderr: (4.711894158s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817722 image build -t localhost/my-image:functional-817722 testdata/build --alsologtostderr:
I0120 12:59:41.127748 1015283 out.go:345] Setting OutFile to fd 1 ...
I0120 12:59:41.127887 1015283 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:41.127902 1015283 out.go:358] Setting ErrFile to fd 2...
I0120 12:59:41.127907 1015283 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 12:59:41.128082 1015283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
I0120 12:59:41.128720 1015283 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:41.129401 1015283 config.go:182] Loaded profile config "functional-817722": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 12:59:41.129802 1015283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:41.129852 1015283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:41.146104 1015283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
I0120 12:59:41.146685 1015283 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:41.147382 1015283 main.go:141] libmachine: Using API Version  1
I0120 12:59:41.147418 1015283 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:41.147950 1015283 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:41.148419 1015283 main.go:141] libmachine: (functional-817722) Calling .GetState
I0120 12:59:41.150486 1015283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 12:59:41.150549 1015283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 12:59:41.166698 1015283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
I0120 12:59:41.167226 1015283 main.go:141] libmachine: () Calling .GetVersion
I0120 12:59:41.167739 1015283 main.go:141] libmachine: Using API Version  1
I0120 12:59:41.167765 1015283 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 12:59:41.168149 1015283 main.go:141] libmachine: () Calling .GetMachineName
I0120 12:59:41.168457 1015283 main.go:141] libmachine: (functional-817722) Calling .DriverName
I0120 12:59:41.168713 1015283 ssh_runner.go:195] Run: systemctl --version
I0120 12:59:41.168765 1015283 main.go:141] libmachine: (functional-817722) Calling .GetSSHHostname
I0120 12:59:41.172218 1015283 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:41.172641 1015283 main.go:141] libmachine: (functional-817722) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:d3:6c", ip: ""} in network mk-functional-817722: {Iface:virbr1 ExpiryTime:2025-01-20 13:56:43 +0000 UTC Type:0 Mac:52:54:00:a5:d3:6c Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:functional-817722 Clientid:01:52:54:00:a5:d3:6c}
I0120 12:59:41.172684 1015283 main.go:141] libmachine: (functional-817722) DBG | domain functional-817722 has defined IP address 192.168.39.218 and MAC address 52:54:00:a5:d3:6c in network mk-functional-817722
I0120 12:59:41.172882 1015283 main.go:141] libmachine: (functional-817722) Calling .GetSSHPort
I0120 12:59:41.173108 1015283 main.go:141] libmachine: (functional-817722) Calling .GetSSHKeyPath
I0120 12:59:41.173366 1015283 main.go:141] libmachine: (functional-817722) Calling .GetSSHUsername
I0120 12:59:41.173556 1015283 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/functional-817722/id_rsa Username:docker}
I0120 12:59:41.257388 1015283 build_images.go:161] Building image from path: /tmp/build.4245716190.tar
I0120 12:59:41.257474 1015283 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 12:59:41.270478 1015283 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4245716190.tar
I0120 12:59:41.277398 1015283 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4245716190.tar: stat -c "%s %y" /var/lib/minikube/build/build.4245716190.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4245716190.tar': No such file or directory
I0120 12:59:41.277442 1015283 ssh_runner.go:362] scp /tmp/build.4245716190.tar --> /var/lib/minikube/build/build.4245716190.tar (3072 bytes)
I0120 12:59:41.318778 1015283 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4245716190
I0120 12:59:41.330501 1015283 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4245716190 -xf /var/lib/minikube/build/build.4245716190.tar
I0120 12:59:41.342834 1015283 containerd.go:394] Building image: /var/lib/minikube/build/build.4245716190
I0120 12:59:41.342925 1015283 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4245716190 --local dockerfile=/var/lib/minikube/build/build.4245716190 --output type=image,name=localhost/my-image:functional-817722
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.2s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.4s done
#8 exporting manifest sha256:e4dea9e97f896bf4764883bf0bc9a0ed2fe7727abf7bac9b7dbc7e8444f4cf3c 0.0s done
#8 exporting config sha256:8408e4c729ba4c7e773bb200e4ee17bf8773d1d8cc333046a548b6327a1a5041 0.0s done
#8 naming to localhost/my-image:functional-817722 0.0s done
#8 DONE 0.5s
I0120 12:59:45.749520 1015283 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4245716190 --local dockerfile=/var/lib/minikube/build/build.4245716190 --output type=image,name=localhost/my-image:functional-817722: (4.406558328s)
I0120 12:59:45.749608 1015283 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4245716190
I0120 12:59:45.764686 1015283 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4245716190.tar
I0120 12:59:45.779401 1015283 build_images.go:217] Built localhost/my-image:functional-817722 from /tmp/build.4245716190.tar
I0120 12:59:45.779455 1015283 build_images.go:133] succeeded building to: functional-817722
I0120 12:59:45.779462 1015283 build_images.go:134] failed building to: 
I0120 12:59:45.779499 1015283 main.go:141] libmachine: Making call to close driver server
I0120 12:59:45.779519 1015283 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:45.779841 1015283 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:45.779860 1015283 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 12:59:45.779889 1015283 main.go:141] libmachine: (functional-817722) DBG | Closing plugin on server side
I0120 12:59:45.779924 1015283 main.go:141] libmachine: Making call to close driver server
I0120 12:59:45.779941 1015283 main.go:141] libmachine: (functional-817722) Calling .Close
I0120 12:59:45.780285 1015283 main.go:141] libmachine: Successfully made call to close driver server
I0120 12:59:45.780301 1015283 main.go:141] libmachine: (functional-817722) DBG | Closing plugin on server side
I0120 12:59:45.780305 1015283 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.20s)
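With containerd as the runtime there is no dockerd to build against, so image build copies the build context into the VM and drives BuildKit's buildctl there, which is what the trace above shows. The host-side invocation is just the two commands from the log:
# build testdata/build inside the VM and tag the result for the cluster's runtime
out/minikube-linux-amd64 -p functional-817722 image build -t localhost/my-image:functional-817722 testdata/build --alsologtostderr
# confirm the freshly built image is visible to the runtime
out/minikube-linux-amd64 -p functional-817722 image ls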

TestFunctional/parallel/ImageCommands/Setup (1.85s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.828139431s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-817722
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.85s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-817722 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-817722 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-kgq4n" [14ded540-151f-4d70-a29f-6f66ede1357d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-kgq4n" [14ded540-151f-4d70-a29f-6f66ede1357d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00523927s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image load --daemon kicbase/echo-server:functional-817722 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-817722 image load --daemon kicbase/echo-server:functional-817722 --alsologtostderr: (1.299461761s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image load --daemon kicbase/echo-server:functional-817722 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-817722 image load --daemon kicbase/echo-server:functional-817722 --alsologtostderr: (1.145831851s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-817722
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image load --daemon kicbase/echo-server:functional-817722 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.06s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image save kicbase/echo-server:functional-817722 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image rm kicbase/echo-server:functional-817722 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-817722
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 image save --daemon kicbase/echo-server:functional-817722 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-817722
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
E0120 12:59:24.387392 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1315: Took "288.71443ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "60.684104ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "349.769047ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "76.155231ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (10.85s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdany-port3630603524/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737377964920068053" to /tmp/TestFunctionalparallelMountCmdany-port3630603524/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737377964920068053" to /tmp/TestFunctionalparallelMountCmdany-port3630603524/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737377964920068053" to /tmp/TestFunctionalparallelMountCmdany-port3630603524/001/test-1737377964920068053
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.507566ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 12:59:25.174910 1006263 retry.go:31] will retry after 620.193421ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 12:59 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 12:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 12:59 test-1737377964920068053
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh cat /mount-9p/test-1737377964920068053
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-817722 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [177946cb-d2a9-47e8-a798-843b56103ad4] Pending
helpers_test.go:344: "busybox-mount" [177946cb-d2a9-47e8-a798-843b56103ad4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [177946cb-d2a9-47e8-a798-843b56103ad4] Running
helpers_test.go:344: "busybox-mount" [177946cb-d2a9-47e8-a798-843b56103ad4] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [177946cb-d2a9-47e8-a798-843b56103ad4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.00353189s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-817722 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdany-port3630603524/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.85s)
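The any-port flow above reduces to the command sequence below. This is a manual repro sketch built from the commands the test logs; the host directory is a placeholder (the test uses a per-run directory under /tmp), and the first findmnt probe may exit 1 until the 9p mount daemon is ready, which is why the test retries it:

    # hypothetical manual repro of the 9p mount check
    HOSTDIR=$(mktemp -d)
    out/minikube-linux-amd64 mount -p functional-817722 "$HOSTDIR":/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-817722 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-817722 ssh "sudo umount -f /mount-9p"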

TestFunctional/parallel/ServiceCmd/List (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 service list -o json
functional_test.go:1494: Took "308.82772ms" to run "out/minikube-linux-amd64 -p functional-817722 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.218:31578
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.75s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.75s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.218:31578
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/parallel/MountCmd/specific-port (1.67s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdspecific-port1111440715/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (223.013601ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 12:59:35.995316 1006263 retry.go:31] will retry after 367.284724ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdspecific-port1111440715/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 ssh "sudo umount -f /mount-9p": exit status 1 (215.060078ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-817722 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdspecific-port1111440715/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3583674308/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3583674308/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3583674308/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T" /mount1: exit status 1 (282.280883ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 12:59:37.723815 1006263 retry.go:31] will retry after 635.057815ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-817722 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-817722 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3583674308/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3583674308/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817722 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3583674308/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-817722
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-817722
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-817722
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (195.77s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-285060 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0120 13:01:40.520637 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:02:08.229662 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-285060 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m15.054723605s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.77s)

TestMultiControlPlane/serial/DeployApp (6.39s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-285060 -- rollout status deployment/busybox: (4.110040937s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-2q57f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-nm2ww -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-stbjb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-2q57f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-nm2ww -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-stbjb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-2q57f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-nm2ww -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-stbjb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.39s)

TestMultiControlPlane/serial/PingHostFromPods (1.29s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-2q57f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-2q57f -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-nm2ww -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-nm2ww -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-stbjb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285060 -- exec busybox-58667487b6-stbjb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

TestMultiControlPlane/serial/AddWorkerNode (58.17s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-285060 -v=7 --alsologtostderr
E0120 13:04:15.260800 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:15.267324 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:15.278796 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:15.300276 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:15.341901 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:15.423484 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:15.585643 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:15.907340 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:16.549166 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:17.830956 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:20.392647 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-285060 -v=7 --alsologtostderr: (57.268058406s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.17s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-285060 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

TestMultiControlPlane/serial/CopyFile (13.86s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp testdata/cp-test.txt ha-285060:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile676540998/001/cp-test_ha-285060.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060:/home/docker/cp-test.txt ha-285060-m02:/home/docker/cp-test_ha-285060_ha-285060-m02.txt
E0120 13:04:25.514793 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m02 "sudo cat /home/docker/cp-test_ha-285060_ha-285060-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060:/home/docker/cp-test.txt ha-285060-m03:/home/docker/cp-test_ha-285060_ha-285060-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m03 "sudo cat /home/docker/cp-test_ha-285060_ha-285060-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060:/home/docker/cp-test.txt ha-285060-m04:/home/docker/cp-test_ha-285060_ha-285060-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m04 "sudo cat /home/docker/cp-test_ha-285060_ha-285060-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp testdata/cp-test.txt ha-285060-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile676540998/001/cp-test_ha-285060-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m02:/home/docker/cp-test.txt ha-285060:/home/docker/cp-test_ha-285060-m02_ha-285060.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060 "sudo cat /home/docker/cp-test_ha-285060-m02_ha-285060.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m02:/home/docker/cp-test.txt ha-285060-m03:/home/docker/cp-test_ha-285060-m02_ha-285060-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m03 "sudo cat /home/docker/cp-test_ha-285060-m02_ha-285060-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m02:/home/docker/cp-test.txt ha-285060-m04:/home/docker/cp-test_ha-285060-m02_ha-285060-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m04 "sudo cat /home/docker/cp-test_ha-285060-m02_ha-285060-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp testdata/cp-test.txt ha-285060-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile676540998/001/cp-test_ha-285060-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m03:/home/docker/cp-test.txt ha-285060:/home/docker/cp-test_ha-285060-m03_ha-285060.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060 "sudo cat /home/docker/cp-test_ha-285060-m03_ha-285060.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m03:/home/docker/cp-test.txt ha-285060-m02:/home/docker/cp-test_ha-285060-m03_ha-285060-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m02 "sudo cat /home/docker/cp-test_ha-285060-m03_ha-285060-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m03:/home/docker/cp-test.txt ha-285060-m04:/home/docker/cp-test_ha-285060-m03_ha-285060-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m04 "sudo cat /home/docker/cp-test_ha-285060-m03_ha-285060-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp testdata/cp-test.txt ha-285060-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile676540998/001/cp-test_ha-285060-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m04:/home/docker/cp-test.txt ha-285060:/home/docker/cp-test_ha-285060-m04_ha-285060.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060 "sudo cat /home/docker/cp-test_ha-285060-m04_ha-285060.txt"
E0120 13:04:35.756172 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m04:/home/docker/cp-test.txt ha-285060-m02:/home/docker/cp-test_ha-285060-m04_ha-285060-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m02 "sudo cat /home/docker/cp-test_ha-285060-m04_ha-285060-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 cp ha-285060-m04:/home/docker/cp-test.txt ha-285060-m03:/home/docker/cp-test_ha-285060-m04_ha-285060-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 ssh -n ha-285060-m03 "sudo cat /home/docker/cp-test_ha-285060-m04_ha-285060-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.86s)

TestMultiControlPlane/serial/StopSecondaryNode (91.72s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 node stop m02 -v=7 --alsologtostderr
E0120 13:04:56.237732 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:05:37.199473 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-285060 node stop m02 -v=7 --alsologtostderr: (1m31.023351915s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr: exit status 7 (699.808267ms)

-- stdout --
	ha-285060
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285060-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-285060-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285060-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0120 13:06:08.557155 1020127 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:06:08.557298 1020127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:06:08.557311 1020127 out.go:358] Setting ErrFile to fd 2...
	I0120 13:06:08.557317 1020127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:06:08.557533 1020127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 13:06:08.557720 1020127 out.go:352] Setting JSON to false
	I0120 13:06:08.557755 1020127 mustload.go:65] Loading cluster: ha-285060
	I0120 13:06:08.557904 1020127 notify.go:220] Checking for updates...
	I0120 13:06:08.558418 1020127 config.go:182] Loaded profile config "ha-285060": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:06:08.558462 1020127 status.go:174] checking status of ha-285060 ...
	I0120 13:06:08.559023 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:08.559069 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:08.588910 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0120 13:06:08.589528 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:08.590211 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:08.590242 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:08.590628 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:08.590890 1020127 main.go:141] libmachine: (ha-285060) Calling .GetState
	I0120 13:06:08.592714 1020127 status.go:371] ha-285060 host status = "Running" (err=<nil>)
	I0120 13:06:08.592731 1020127 host.go:66] Checking if "ha-285060" exists ...
	I0120 13:06:08.593074 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:08.593112 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:08.608353 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33251
	I0120 13:06:08.608814 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:08.609343 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:08.609371 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:08.609713 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:08.609946 1020127 main.go:141] libmachine: (ha-285060) Calling .GetIP
	I0120 13:06:08.612860 1020127 main.go:141] libmachine: (ha-285060) DBG | domain ha-285060 has defined MAC address 52:54:00:63:83:7a in network mk-ha-285060
	I0120 13:06:08.613324 1020127 main.go:141] libmachine: (ha-285060) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:83:7a", ip: ""} in network mk-ha-285060: {Iface:virbr1 ExpiryTime:2025-01-20 14:00:16 +0000 UTC Type:0 Mac:52:54:00:63:83:7a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-285060 Clientid:01:52:54:00:63:83:7a}
	I0120 13:06:08.613352 1020127 main.go:141] libmachine: (ha-285060) DBG | domain ha-285060 has defined IP address 192.168.39.86 and MAC address 52:54:00:63:83:7a in network mk-ha-285060
	I0120 13:06:08.613489 1020127 host.go:66] Checking if "ha-285060" exists ...
	I0120 13:06:08.613871 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:08.613910 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:08.630474 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0120 13:06:08.631046 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:08.631597 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:08.631621 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:08.631961 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:08.632173 1020127 main.go:141] libmachine: (ha-285060) Calling .DriverName
	I0120 13:06:08.632410 1020127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:06:08.632438 1020127 main.go:141] libmachine: (ha-285060) Calling .GetSSHHostname
	I0120 13:06:08.635342 1020127 main.go:141] libmachine: (ha-285060) DBG | domain ha-285060 has defined MAC address 52:54:00:63:83:7a in network mk-ha-285060
	I0120 13:06:08.635805 1020127 main.go:141] libmachine: (ha-285060) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:83:7a", ip: ""} in network mk-ha-285060: {Iface:virbr1 ExpiryTime:2025-01-20 14:00:16 +0000 UTC Type:0 Mac:52:54:00:63:83:7a Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-285060 Clientid:01:52:54:00:63:83:7a}
	I0120 13:06:08.635837 1020127 main.go:141] libmachine: (ha-285060) DBG | domain ha-285060 has defined IP address 192.168.39.86 and MAC address 52:54:00:63:83:7a in network mk-ha-285060
	I0120 13:06:08.635996 1020127 main.go:141] libmachine: (ha-285060) Calling .GetSSHPort
	I0120 13:06:08.636182 1020127 main.go:141] libmachine: (ha-285060) Calling .GetSSHKeyPath
	I0120 13:06:08.636320 1020127 main.go:141] libmachine: (ha-285060) Calling .GetSSHUsername
	I0120 13:06:08.636467 1020127 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/ha-285060/id_rsa Username:docker}
	I0120 13:06:08.724777 1020127 ssh_runner.go:195] Run: systemctl --version
	I0120 13:06:08.736878 1020127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:06:08.757583 1020127 kubeconfig.go:125] found "ha-285060" server: "https://192.168.39.254:8443"
	I0120 13:06:08.757632 1020127 api_server.go:166] Checking apiserver status ...
	I0120 13:06:08.757676 1020127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:06:08.774290 1020127 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup
	W0120 13:06:08.785183 1020127 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 13:06:08.785295 1020127 ssh_runner.go:195] Run: ls
	I0120 13:06:08.790934 1020127 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 13:06:08.795916 1020127 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 13:06:08.795943 1020127 status.go:463] ha-285060 apiserver status = Running (err=<nil>)
	I0120 13:06:08.795964 1020127 status.go:176] ha-285060 status: &{Name:ha-285060 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:06:08.795982 1020127 status.go:174] checking status of ha-285060-m02 ...
	I0120 13:06:08.796292 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:08.796329 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:08.811848 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I0120 13:06:08.812459 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:08.813003 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:08.813029 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:08.813435 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:08.813692 1020127 main.go:141] libmachine: (ha-285060-m02) Calling .GetState
	I0120 13:06:08.815387 1020127 status.go:371] ha-285060-m02 host status = "Stopped" (err=<nil>)
	I0120 13:06:08.815402 1020127 status.go:384] host is not running, skipping remaining checks
	I0120 13:06:08.815408 1020127 status.go:176] ha-285060-m02 status: &{Name:ha-285060-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:06:08.815428 1020127 status.go:174] checking status of ha-285060-m03 ...
	I0120 13:06:08.815727 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:08.815787 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:08.831940 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0120 13:06:08.832546 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:08.833068 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:08.833090 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:08.833486 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:08.833665 1020127 main.go:141] libmachine: (ha-285060-m03) Calling .GetState
	I0120 13:06:08.835258 1020127 status.go:371] ha-285060-m03 host status = "Running" (err=<nil>)
	I0120 13:06:08.835283 1020127 host.go:66] Checking if "ha-285060-m03" exists ...
	I0120 13:06:08.835586 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:08.835622 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:08.851376 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34939
	I0120 13:06:08.852018 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:08.852592 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:08.852622 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:08.853074 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:08.853335 1020127 main.go:141] libmachine: (ha-285060-m03) Calling .GetIP
	I0120 13:06:08.856876 1020127 main.go:141] libmachine: (ha-285060-m03) DBG | domain ha-285060-m03 has defined MAC address 52:54:00:af:fe:04 in network mk-ha-285060
	I0120 13:06:08.857573 1020127 main.go:141] libmachine: (ha-285060-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:fe:04", ip: ""} in network mk-ha-285060: {Iface:virbr1 ExpiryTime:2025-01-20 14:02:18 +0000 UTC Type:0 Mac:52:54:00:af:fe:04 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-285060-m03 Clientid:01:52:54:00:af:fe:04}
	I0120 13:06:08.857599 1020127 main.go:141] libmachine: (ha-285060-m03) DBG | domain ha-285060-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:af:fe:04 in network mk-ha-285060
	I0120 13:06:08.857791 1020127 host.go:66] Checking if "ha-285060-m03" exists ...
	I0120 13:06:08.858155 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:08.858198 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:08.873845 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0120 13:06:08.874342 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:08.874873 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:08.874897 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:08.875242 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:08.875466 1020127 main.go:141] libmachine: (ha-285060-m03) Calling .DriverName
	I0120 13:06:08.875653 1020127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:06:08.875694 1020127 main.go:141] libmachine: (ha-285060-m03) Calling .GetSSHHostname
	I0120 13:06:08.878814 1020127 main.go:141] libmachine: (ha-285060-m03) DBG | domain ha-285060-m03 has defined MAC address 52:54:00:af:fe:04 in network mk-ha-285060
	I0120 13:06:08.879319 1020127 main.go:141] libmachine: (ha-285060-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:fe:04", ip: ""} in network mk-ha-285060: {Iface:virbr1 ExpiryTime:2025-01-20 14:02:18 +0000 UTC Type:0 Mac:52:54:00:af:fe:04 Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:ha-285060-m03 Clientid:01:52:54:00:af:fe:04}
	I0120 13:06:08.879348 1020127 main.go:141] libmachine: (ha-285060-m03) DBG | domain ha-285060-m03 has defined IP address 192.168.39.109 and MAC address 52:54:00:af:fe:04 in network mk-ha-285060
	I0120 13:06:08.879493 1020127 main.go:141] libmachine: (ha-285060-m03) Calling .GetSSHPort
	I0120 13:06:08.879704 1020127 main.go:141] libmachine: (ha-285060-m03) Calling .GetSSHKeyPath
	I0120 13:06:08.879842 1020127 main.go:141] libmachine: (ha-285060-m03) Calling .GetSSHUsername
	I0120 13:06:08.879972 1020127 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/ha-285060-m03/id_rsa Username:docker}
	I0120 13:06:08.966961 1020127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:06:08.992383 1020127 kubeconfig.go:125] found "ha-285060" server: "https://192.168.39.254:8443"
	I0120 13:06:08.992413 1020127 api_server.go:166] Checking apiserver status ...
	I0120 13:06:08.992461 1020127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:06:09.008956 1020127 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1204/cgroup
	W0120 13:06:09.020347 1020127 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1204/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 13:06:09.020409 1020127 ssh_runner.go:195] Run: ls
	I0120 13:06:09.025546 1020127 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 13:06:09.030465 1020127 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 13:06:09.030493 1020127 status.go:463] ha-285060-m03 apiserver status = Running (err=<nil>)
	I0120 13:06:09.030504 1020127 status.go:176] ha-285060-m03 status: &{Name:ha-285060-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:06:09.030527 1020127 status.go:174] checking status of ha-285060-m04 ...
	I0120 13:06:09.030957 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:09.031002 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:09.046828 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45911
	I0120 13:06:09.047315 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:09.047851 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:09.047880 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:09.048293 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:09.048555 1020127 main.go:141] libmachine: (ha-285060-m04) Calling .GetState
	I0120 13:06:09.050088 1020127 status.go:371] ha-285060-m04 host status = "Running" (err=<nil>)
	I0120 13:06:09.050114 1020127 host.go:66] Checking if "ha-285060-m04" exists ...
	I0120 13:06:09.050589 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:09.050647 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:09.067736 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I0120 13:06:09.068325 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:09.068897 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:09.068929 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:09.069265 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:09.069443 1020127 main.go:141] libmachine: (ha-285060-m04) Calling .GetIP
	I0120 13:06:09.072271 1020127 main.go:141] libmachine: (ha-285060-m04) DBG | domain ha-285060-m04 has defined MAC address 52:54:00:2f:7e:ea in network mk-ha-285060
	I0120 13:06:09.072779 1020127 main.go:141] libmachine: (ha-285060-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:7e:ea", ip: ""} in network mk-ha-285060: {Iface:virbr1 ExpiryTime:2025-01-20 14:03:41 +0000 UTC Type:0 Mac:52:54:00:2f:7e:ea Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-285060-m04 Clientid:01:52:54:00:2f:7e:ea}
	I0120 13:06:09.072806 1020127 main.go:141] libmachine: (ha-285060-m04) DBG | domain ha-285060-m04 has defined IP address 192.168.39.55 and MAC address 52:54:00:2f:7e:ea in network mk-ha-285060
	I0120 13:06:09.073038 1020127 host.go:66] Checking if "ha-285060-m04" exists ...
	I0120 13:06:09.073395 1020127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:06:09.073440 1020127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:06:09.090485 1020127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37533
	I0120 13:06:09.091197 1020127 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:06:09.091766 1020127 main.go:141] libmachine: Using API Version  1
	I0120 13:06:09.091793 1020127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:06:09.092238 1020127 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:06:09.092480 1020127 main.go:141] libmachine: (ha-285060-m04) Calling .DriverName
	I0120 13:06:09.092701 1020127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:06:09.092725 1020127 main.go:141] libmachine: (ha-285060-m04) Calling .GetSSHHostname
	I0120 13:06:09.096187 1020127 main.go:141] libmachine: (ha-285060-m04) DBG | domain ha-285060-m04 has defined MAC address 52:54:00:2f:7e:ea in network mk-ha-285060
	I0120 13:06:09.096697 1020127 main.go:141] libmachine: (ha-285060-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:7e:ea", ip: ""} in network mk-ha-285060: {Iface:virbr1 ExpiryTime:2025-01-20 14:03:41 +0000 UTC Type:0 Mac:52:54:00:2f:7e:ea Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-285060-m04 Clientid:01:52:54:00:2f:7e:ea}
	I0120 13:06:09.096741 1020127 main.go:141] libmachine: (ha-285060-m04) DBG | domain ha-285060-m04 has defined IP address 192.168.39.55 and MAC address 52:54:00:2f:7e:ea in network mk-ha-285060
	I0120 13:06:09.096882 1020127 main.go:141] libmachine: (ha-285060-m04) Calling .GetSSHPort
	I0120 13:06:09.097080 1020127 main.go:141] libmachine: (ha-285060-m04) Calling .GetSSHKeyPath
	I0120 13:06:09.097275 1020127 main.go:141] libmachine: (ha-285060-m04) Calling .GetSSHUsername
	I0120 13:06:09.097424 1020127 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/ha-285060-m04/id_rsa Username:docker}
	I0120 13:06:09.182349 1020127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:06:09.202555 1020127 status.go:176] ha-285060-m04 status: &{Name:ha-285060-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.72s)
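
Note on the status log above: after checking kubelet with "systemctl is-active", the status code probes the control-plane VIP at https://192.168.39.254:8443/healthz and treats an HTTP 200 with body "ok" as a running apiserver. Below is a minimal standalone sketch of that probe, not minikube's own status.go code; the URL is taken from the log, and TLS verification is skipped only because this is a throwaway illustration against a local test cluster.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Insecure TLS is an assumption made for brevity; minikube itself trusts the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}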

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (46.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 node start m02 -v=7 --alsologtostderr
E0120 13:06:40.516209 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-285060 node start m02 -v=7 --alsologtostderr: (45.871563069s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (46.83s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (486.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-285060 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-285060 -v=7 --alsologtostderr
E0120 13:06:59.121350 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:09:15.261594 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:09:42.962879 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-285060 -v=7 --alsologtostderr: (4m34.350128808s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-285060 --wait=true -v=7 --alsologtostderr
E0120 13:11:40.517058 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:13:03.591791 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:14:15.260839 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-285060 --wait=true -v=7 --alsologtostderr: (3m32.458638636s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-285060
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (486.93s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-285060 node delete m03 -v=7 --alsologtostderr: (6.581476935s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.39s)
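
The go-template query above prints one Ready-condition status per node, so a healthy two-control-plane cluster should show only "True" values. A rough equivalent run from Go, assuming kubectl is on PATH and the current context points at the test cluster (this is an illustration, not the test suite's own helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test passes to kubectl get nodes.
	tmpl := `go-template={{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	ready := 0
	for _, status := range strings.Fields(string(out)) {
		if status == "True" {
			ready++
		}
	}
	fmt.Println("nodes reporting Ready=True:", ready)
}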

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (273.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 stop -v=7 --alsologtostderr
E0120 13:16:40.517092 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:19:15.261321 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-285060 stop -v=7 --alsologtostderr: (4m32.958290002s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr: exit status 7 (117.158842ms)

                                                
                                                
-- stdout --
	ha-285060
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-285060-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-285060-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:19:45.632209 1024709 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:19:45.632344 1024709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:19:45.632352 1024709 out.go:358] Setting ErrFile to fd 2...
	I0120 13:19:45.632356 1024709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:19:45.632566 1024709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 13:19:45.632737 1024709 out.go:352] Setting JSON to false
	I0120 13:19:45.632769 1024709 mustload.go:65] Loading cluster: ha-285060
	I0120 13:19:45.632885 1024709 notify.go:220] Checking for updates...
	I0120 13:19:45.633236 1024709 config.go:182] Loaded profile config "ha-285060": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:19:45.633266 1024709 status.go:174] checking status of ha-285060 ...
	I0120 13:19:45.633727 1024709 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:19:45.633775 1024709 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:19:45.650253 1024709 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0120 13:19:45.650759 1024709 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:19:45.651351 1024709 main.go:141] libmachine: Using API Version  1
	I0120 13:19:45.651377 1024709 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:19:45.651868 1024709 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:19:45.652145 1024709 main.go:141] libmachine: (ha-285060) Calling .GetState
	I0120 13:19:45.653868 1024709 status.go:371] ha-285060 host status = "Stopped" (err=<nil>)
	I0120 13:19:45.653887 1024709 status.go:384] host is not running, skipping remaining checks
	I0120 13:19:45.653898 1024709 status.go:176] ha-285060 status: &{Name:ha-285060 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:19:45.653942 1024709 status.go:174] checking status of ha-285060-m02 ...
	I0120 13:19:45.654262 1024709 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:19:45.654291 1024709 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:19:45.670067 1024709 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I0120 13:19:45.670627 1024709 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:19:45.671274 1024709 main.go:141] libmachine: Using API Version  1
	I0120 13:19:45.671300 1024709 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:19:45.671627 1024709 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:19:45.671815 1024709 main.go:141] libmachine: (ha-285060-m02) Calling .GetState
	I0120 13:19:45.673581 1024709 status.go:371] ha-285060-m02 host status = "Stopped" (err=<nil>)
	I0120 13:19:45.673600 1024709 status.go:384] host is not running, skipping remaining checks
	I0120 13:19:45.673606 1024709 status.go:176] ha-285060-m02 status: &{Name:ha-285060-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:19:45.673629 1024709 status.go:174] checking status of ha-285060-m04 ...
	I0120 13:19:45.673961 1024709 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:19:45.674018 1024709 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:19:45.690266 1024709 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I0120 13:19:45.690759 1024709 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:19:45.691464 1024709 main.go:141] libmachine: Using API Version  1
	I0120 13:19:45.691516 1024709 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:19:45.691921 1024709 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:19:45.692177 1024709 main.go:141] libmachine: (ha-285060-m04) Calling .GetState
	I0120 13:19:45.694188 1024709 status.go:371] ha-285060-m04 host status = "Stopped" (err=<nil>)
	I0120 13:19:45.694210 1024709 status.go:384] host is not running, skipping remaining checks
	I0120 13:19:45.694216 1024709 status.go:176] ha-285060-m04 status: &{Name:ha-285060-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (273.08s)
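
As the "Non-zero exit ... exit status 7" line above shows, "minikube status" exits non-zero when hosts are stopped, which the test deliberately tolerates. A hedged sketch of how a caller can surface that exit code through os/exec (binary path and profile name are taken from this log; the meaning of code 7 here is simply what the stopped cluster produced above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-285060", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// For the fully stopped cluster logged above this was 7.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}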

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (133.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-285060 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0120 13:20:38.325225 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:21:40.517003 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-285060 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m12.327116297s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (133.15s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-285060 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-285060 --control-plane -v=7 --alsologtostderr: (1m16.247092787s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-285060 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.16s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
TestJSONOutput/start/Command (60.37s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-507130 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0120 13:24:15.264985 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-507130 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m0.371234489s)
--- PASS: TestJSONOutput/start/Command (60.37s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
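
The DistinctCurrentSteps and IncreasingCurrentSteps subtests assert properties of the step events that --output=json emits, one JSON object per line (the event shape is visible later under TestErrorJSONOutput). A small sketch, not the test's own assertion code, that scans such a stream on stdin and checks the "currentstep" values never decrease; field names follow the events printed in this report:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-JSON lines and non-step events
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		if cur < last {
			fmt.Printf("step went backwards: %d after %d\n", cur, last)
			os.Exit(1)
		}
		last = cur
	}
	fmt.Println("currentstep values never decreased")
}

One way to feed it, using the command from this log: out/minikube-linux-amd64 start -p json-output-507130 --output=json --user=testUser ... piped into this program.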

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-507130 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-507130 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-507130 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-507130 --output=json --user=testUser: (7.361887678s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-560854 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-560854 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.836148ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d70b77ca-9d26-46b7-a4e9-dac6ab7b718a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-560854] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d37b7ed-e8f8-4e59-824d-b67eb3dc259a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20242"}}
	{"specversion":"1.0","id":"503fd731-7cee-4149-be2c-87249307df2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ac7706c9-930f-4b9d-b580-52e67ba13514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig"}}
	{"specversion":"1.0","id":"37f724d8-b1c4-45c2-af70-039f0fed86fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube"}}
	{"specversion":"1.0","id":"d6da63e2-f4fc-406b-841e-5cb79d2a4a6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"69553385-d36a-4795-9d37-55e96e10a29c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f8140b34-a333-4cf5-8690-1752b4741e30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-560854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-560854
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (93.6s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-512230 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-512230 --driver=kvm2  --container-runtime=containerd: (44.912972425s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-529245 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-529245 --driver=kvm2  --container-runtime=containerd: (45.459353489s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-512230
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-529245
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-529245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-529245
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-529245: (1.024598819s)
helpers_test.go:175: Cleaning up "first-512230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-512230
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-512230: (1.030479575s)
--- PASS: TestMinikubeProfile (93.60s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.72s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-199713 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-199713 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.71539908s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.72s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-199713 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-199713 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
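
VerifyMountFirst confirms the host directory is exposed to the guest by listing /minikube-host and grepping the guest mount table for a 9p entry. A rough equivalent driven from Go, assuming the mount-start-1-199713 profile from this log is still running and the minikube binary sits at the path used throughout this report; the grep is done on the Go side instead of inside the ssh session:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-199713",
		"ssh", "--", "mount").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), "9p") {
		fmt.Println("9p mount present")
	} else {
		fmt.Println("no 9p mount found")
	}
}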

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-224806 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0120 13:26:40.516171 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-224806 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.06355122s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.06s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224806 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224806 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-199713 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224806 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224806 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-224806
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-224806: (1.332607993s)
--- PASS: TestMountStart/serial/Stop (1.33s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.41s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-224806
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-224806: (23.406866648s)
--- PASS: TestMountStart/serial/RestartStopped (24.41s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224806 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224806 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (123.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-316969 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0120 13:29:15.261240 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-316969 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m3.033196361s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (123.46s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-316969 -- rollout status deployment/busybox: (3.702275731s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-bmkzw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-t25qf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-bmkzw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-t25qf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-bmkzw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-t25qf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.29s)
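
DeployApp2Nodes discovers the busybox pod names with a jsonpath query and then runs nslookup inside each pod. A sketch of that loop, with the assumption that kubectl is on PATH, the multinode context is selected, and the default namespace contains only the test's busybox pods (the test itself goes through "minikube kubectl -p" instead):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	names, err := exec.Command("kubectl", "get", "pods",
		"-o", `jsonpath={.items[*].metadata.name}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, pod := range strings.Fields(string(names)) {
		// Same in-pod DNS check the test performs.
		out, err := exec.Command("kubectl", "exec", pod, "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		fmt.Printf("%s:\n%s(err=%v)\n", pod, out, err)
	}
}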

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-bmkzw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-bmkzw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-t25qf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-316969 -- exec busybox-58667487b6-t25qf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                    
TestMultiNode/serial/AddNode (53.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-316969 -v 3 --alsologtostderr
E0120 13:29:43.593492 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-316969 -v 3 --alsologtostderr: (52.759036709s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.39s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-316969 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp testdata/cp-test.txt multinode-316969:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp multinode-316969:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3240958165/001/cp-test_multinode-316969.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp multinode-316969:/home/docker/cp-test.txt multinode-316969-m02:/home/docker/cp-test_multinode-316969_multinode-316969-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m02 "sudo cat /home/docker/cp-test_multinode-316969_multinode-316969-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp multinode-316969:/home/docker/cp-test.txt multinode-316969-m03:/home/docker/cp-test_multinode-316969_multinode-316969-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m03 "sudo cat /home/docker/cp-test_multinode-316969_multinode-316969-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp testdata/cp-test.txt multinode-316969-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp multinode-316969-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3240958165/001/cp-test_multinode-316969-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp multinode-316969-m02:/home/docker/cp-test.txt multinode-316969:/home/docker/cp-test_multinode-316969-m02_multinode-316969.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969 "sudo cat /home/docker/cp-test_multinode-316969-m02_multinode-316969.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp multinode-316969-m02:/home/docker/cp-test.txt multinode-316969-m03:/home/docker/cp-test_multinode-316969-m02_multinode-316969-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m03 "sudo cat /home/docker/cp-test_multinode-316969-m02_multinode-316969-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp testdata/cp-test.txt multinode-316969-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp multinode-316969-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3240958165/001/cp-test_multinode-316969-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp multinode-316969-m03:/home/docker/cp-test.txt multinode-316969:/home/docker/cp-test_multinode-316969-m03_multinode-316969.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969 "sudo cat /home/docker/cp-test_multinode-316969-m03_multinode-316969.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 cp multinode-316969-m03:/home/docker/cp-test.txt multinode-316969-m02:/home/docker/cp-test_multinode-316969-m03_multinode-316969-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 ssh -n multinode-316969-m02 "sudo cat /home/docker/cp-test_multinode-316969-m03_multinode-316969-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.70s)
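
The CopyFile block round-trips testdata/cp-test.txt through "minikube cp" and reads it back with "minikube ssh -n <node> sudo cat". A compact sketch of one such round trip, using the profile and node names from this log and the local cp-test.txt contents as the expected value (an illustration, not the helpers_test.go implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		fmt.Println("read local file:", err)
		return
	}
	mk := "out/minikube-linux-amd64"
	if err := exec.Command(mk, "-p", "multinode-316969", "cp",
		"testdata/cp-test.txt", "multinode-316969:/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	remote, err := exec.Command(mk, "-p", "multinode-316969", "ssh", "-n", "multinode-316969",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)))
}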

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-316969 node stop m03: (1.459191791s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-316969 status: exit status 7 (473.738841ms)

                                                
                                                
-- stdout --
	multinode-316969
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-316969-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-316969-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr: exit status 7 (450.381694ms)

                                                
                                                
-- stdout --
	multinode-316969
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-316969-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-316969-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:30:44.824529 1032486 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:30:44.824662 1032486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:30:44.824674 1032486 out.go:358] Setting ErrFile to fd 2...
	I0120 13:30:44.824680 1032486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:30:44.824889 1032486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 13:30:44.825163 1032486 out.go:352] Setting JSON to false
	I0120 13:30:44.825215 1032486 mustload.go:65] Loading cluster: multinode-316969
	I0120 13:30:44.825374 1032486 notify.go:220] Checking for updates...
	I0120 13:30:44.825723 1032486 config.go:182] Loaded profile config "multinode-316969": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:30:44.825757 1032486 status.go:174] checking status of multinode-316969 ...
	I0120 13:30:44.826239 1032486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:30:44.826297 1032486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:30:44.845954 1032486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43289
	I0120 13:30:44.846556 1032486 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:30:44.847258 1032486 main.go:141] libmachine: Using API Version  1
	I0120 13:30:44.847293 1032486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:30:44.847716 1032486 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:30:44.847986 1032486 main.go:141] libmachine: (multinode-316969) Calling .GetState
	I0120 13:30:44.850115 1032486 status.go:371] multinode-316969 host status = "Running" (err=<nil>)
	I0120 13:30:44.850137 1032486 host.go:66] Checking if "multinode-316969" exists ...
	I0120 13:30:44.850571 1032486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:30:44.850641 1032486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:30:44.867111 1032486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38085
	I0120 13:30:44.867722 1032486 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:30:44.868326 1032486 main.go:141] libmachine: Using API Version  1
	I0120 13:30:44.868345 1032486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:30:44.868788 1032486 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:30:44.869022 1032486 main.go:141] libmachine: (multinode-316969) Calling .GetIP
	I0120 13:30:44.872356 1032486 main.go:141] libmachine: (multinode-316969) DBG | domain multinode-316969 has defined MAC address 52:54:00:1f:bb:8d in network mk-multinode-316969
	I0120 13:30:44.872904 1032486 main.go:141] libmachine: (multinode-316969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:bb:8d", ip: ""} in network mk-multinode-316969: {Iface:virbr1 ExpiryTime:2025-01-20 14:27:47 +0000 UTC Type:0 Mac:52:54:00:1f:bb:8d Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-316969 Clientid:01:52:54:00:1f:bb:8d}
	I0120 13:30:44.872942 1032486 main.go:141] libmachine: (multinode-316969) DBG | domain multinode-316969 has defined IP address 192.168.39.245 and MAC address 52:54:00:1f:bb:8d in network mk-multinode-316969
	I0120 13:30:44.873273 1032486 host.go:66] Checking if "multinode-316969" exists ...
	I0120 13:30:44.873672 1032486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:30:44.873734 1032486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:30:44.890037 1032486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I0120 13:30:44.890532 1032486 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:30:44.891123 1032486 main.go:141] libmachine: Using API Version  1
	I0120 13:30:44.891150 1032486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:30:44.891535 1032486 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:30:44.891830 1032486 main.go:141] libmachine: (multinode-316969) Calling .DriverName
	I0120 13:30:44.892100 1032486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:30:44.892133 1032486 main.go:141] libmachine: (multinode-316969) Calling .GetSSHHostname
	I0120 13:30:44.896488 1032486 main.go:141] libmachine: (multinode-316969) DBG | domain multinode-316969 has defined MAC address 52:54:00:1f:bb:8d in network mk-multinode-316969
	I0120 13:30:44.896971 1032486 main.go:141] libmachine: (multinode-316969) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:bb:8d", ip: ""} in network mk-multinode-316969: {Iface:virbr1 ExpiryTime:2025-01-20 14:27:47 +0000 UTC Type:0 Mac:52:54:00:1f:bb:8d Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-316969 Clientid:01:52:54:00:1f:bb:8d}
	I0120 13:30:44.897006 1032486 main.go:141] libmachine: (multinode-316969) DBG | domain multinode-316969 has defined IP address 192.168.39.245 and MAC address 52:54:00:1f:bb:8d in network mk-multinode-316969
	I0120 13:30:44.897254 1032486 main.go:141] libmachine: (multinode-316969) Calling .GetSSHPort
	I0120 13:30:44.897578 1032486 main.go:141] libmachine: (multinode-316969) Calling .GetSSHKeyPath
	I0120 13:30:44.897788 1032486 main.go:141] libmachine: (multinode-316969) Calling .GetSSHUsername
	I0120 13:30:44.897975 1032486 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/multinode-316969/id_rsa Username:docker}
	I0120 13:30:44.976980 1032486 ssh_runner.go:195] Run: systemctl --version
	I0120 13:30:44.985980 1032486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:30:45.002968 1032486 kubeconfig.go:125] found "multinode-316969" server: "https://192.168.39.245:8443"
	I0120 13:30:45.003012 1032486 api_server.go:166] Checking apiserver status ...
	I0120 13:30:45.003046 1032486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:30:45.018738 1032486 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1070/cgroup
	W0120 13:30:45.030160 1032486 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1070/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 13:30:45.030225 1032486 ssh_runner.go:195] Run: ls
	I0120 13:30:45.035362 1032486 api_server.go:253] Checking apiserver healthz at https://192.168.39.245:8443/healthz ...
	I0120 13:30:45.040427 1032486 api_server.go:279] https://192.168.39.245:8443/healthz returned 200:
	ok
	I0120 13:30:45.040464 1032486 status.go:463] multinode-316969 apiserver status = Running (err=<nil>)
	I0120 13:30:45.040478 1032486 status.go:176] multinode-316969 status: &{Name:multinode-316969 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:30:45.040497 1032486 status.go:174] checking status of multinode-316969-m02 ...
	I0120 13:30:45.040809 1032486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:30:45.040849 1032486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:30:45.056742 1032486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46469
	I0120 13:30:45.057354 1032486 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:30:45.058025 1032486 main.go:141] libmachine: Using API Version  1
	I0120 13:30:45.058048 1032486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:30:45.058416 1032486 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:30:45.058641 1032486 main.go:141] libmachine: (multinode-316969-m02) Calling .GetState
	I0120 13:30:45.060111 1032486 status.go:371] multinode-316969-m02 host status = "Running" (err=<nil>)
	I0120 13:30:45.060129 1032486 host.go:66] Checking if "multinode-316969-m02" exists ...
	I0120 13:30:45.060453 1032486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:30:45.060502 1032486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:30:45.077066 1032486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I0120 13:30:45.077536 1032486 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:30:45.078048 1032486 main.go:141] libmachine: Using API Version  1
	I0120 13:30:45.078066 1032486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:30:45.078370 1032486 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:30:45.078537 1032486 main.go:141] libmachine: (multinode-316969-m02) Calling .GetIP
	I0120 13:30:45.081555 1032486 main.go:141] libmachine: (multinode-316969-m02) DBG | domain multinode-316969-m02 has defined MAC address 52:54:00:15:e1:6f in network mk-multinode-316969
	I0120 13:30:45.082004 1032486 main.go:141] libmachine: (multinode-316969-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:e1:6f", ip: ""} in network mk-multinode-316969: {Iface:virbr1 ExpiryTime:2025-01-20 14:28:58 +0000 UTC Type:0 Mac:52:54:00:15:e1:6f Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-316969-m02 Clientid:01:52:54:00:15:e1:6f}
	I0120 13:30:45.082032 1032486 main.go:141] libmachine: (multinode-316969-m02) DBG | domain multinode-316969-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:15:e1:6f in network mk-multinode-316969
	I0120 13:30:45.082217 1032486 host.go:66] Checking if "multinode-316969-m02" exists ...
	I0120 13:30:45.082530 1032486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:30:45.082575 1032486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:30:45.099095 1032486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43397
	I0120 13:30:45.099574 1032486 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:30:45.100029 1032486 main.go:141] libmachine: Using API Version  1
	I0120 13:30:45.100049 1032486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:30:45.100373 1032486 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:30:45.100609 1032486 main.go:141] libmachine: (multinode-316969-m02) Calling .DriverName
	I0120 13:30:45.100841 1032486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:30:45.100873 1032486 main.go:141] libmachine: (multinode-316969-m02) Calling .GetSSHHostname
	I0120 13:30:45.103670 1032486 main.go:141] libmachine: (multinode-316969-m02) DBG | domain multinode-316969-m02 has defined MAC address 52:54:00:15:e1:6f in network mk-multinode-316969
	I0120 13:30:45.104148 1032486 main.go:141] libmachine: (multinode-316969-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:e1:6f", ip: ""} in network mk-multinode-316969: {Iface:virbr1 ExpiryTime:2025-01-20 14:28:58 +0000 UTC Type:0 Mac:52:54:00:15:e1:6f Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-316969-m02 Clientid:01:52:54:00:15:e1:6f}
	I0120 13:30:45.104202 1032486 main.go:141] libmachine: (multinode-316969-m02) DBG | domain multinode-316969-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:15:e1:6f in network mk-multinode-316969
	I0120 13:30:45.104319 1032486 main.go:141] libmachine: (multinode-316969-m02) Calling .GetSSHPort
	I0120 13:30:45.104558 1032486 main.go:141] libmachine: (multinode-316969-m02) Calling .GetSSHKeyPath
	I0120 13:30:45.104703 1032486 main.go:141] libmachine: (multinode-316969-m02) Calling .GetSSHUsername
	I0120 13:30:45.104865 1032486 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/multinode-316969-m02/id_rsa Username:docker}
	I0120 13:30:45.184559 1032486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:30:45.200474 1032486 status.go:176] multinode-316969-m02 status: &{Name:multinode-316969-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:30:45.200519 1032486 status.go:174] checking status of multinode-316969-m03 ...
	I0120 13:30:45.200892 1032486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:30:45.200941 1032486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:30:45.217436 1032486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45239
	I0120 13:30:45.217934 1032486 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:30:45.218495 1032486 main.go:141] libmachine: Using API Version  1
	I0120 13:30:45.218525 1032486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:30:45.218822 1032486 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:30:45.219046 1032486 main.go:141] libmachine: (multinode-316969-m03) Calling .GetState
	I0120 13:30:45.220733 1032486 status.go:371] multinode-316969-m03 host status = "Stopped" (err=<nil>)
	I0120 13:30:45.220749 1032486 status.go:384] host is not running, skipping remaining checks
	I0120 13:30:45.220755 1032486 status.go:176] multinode-316969-m03 status: &{Name:multinode-316969-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
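
For reference, a minimal sketch of the sequence this test exercises (profile name taken from the run above, kvm2 driver and containerd runtime assumed); status returns exit code 7 for as long as any node is stopped:
	out/minikube-linux-amd64 -p multinode-316969 node stop m03
	out/minikube-linux-amd64 -p multinode-316969 status                    # exit status 7: m03 host and kubelet report Stopped
	out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr  # same result, with the libmachine/status trace on stderr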

                                                
                                    
TestMultiNode/serial/StartAfterStop (35.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-316969 node start m03 -v=7 --alsologtostderr: (35.043220149s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.70s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (316.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-316969
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-316969
E0120 13:31:40.517690 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:34:15.265008 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-316969: (3m3.126327164s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-316969 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-316969 --wait=true -v=8 --alsologtostderr: (2m12.86885999s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-316969
--- PASS: TestMultiNode/serial/RestartKeepsNodes (316.11s)
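
As a sketch of the scenario above: the whole profile is stopped and then restarted with --wait=true, and the node list is compared before and after to confirm no node was dropped (names reused from this run):
	out/minikube-linux-amd64 node list -p multinode-316969
	out/minikube-linux-amd64 stop -p multinode-316969
	out/minikube-linux-amd64 start -p multinode-316969 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-316969   # expected to match the list captured before the stop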

                                                
                                    
TestMultiNode/serial/DeleteNode (2.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-316969 node delete m03: (1.740210261s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.32s)
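
A sketch of the delete-and-verify flow used above; the go-template query prints one Ready condition status per remaining node (kubectl context multinode-316969 assumed):
	out/minikube-linux-amd64 -p multinode-316969 node delete m03
	out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr
	kubectl get nodes
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"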

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 stop
E0120 13:36:40.516481 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:37:18.329356 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:39:15.265055 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-316969 stop: (3m1.762140864s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-316969 status: exit status 7 (97.170185ms)

                                                
                                                
-- stdout --
	multinode-316969
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-316969-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr: exit status 7 (98.239813ms)

                                                
                                                
-- stdout --
	multinode-316969
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-316969-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:39:41.262716 1035234 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:39:41.262870 1035234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:39:41.262877 1035234 out.go:358] Setting ErrFile to fd 2...
	I0120 13:39:41.262881 1035234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:39:41.263067 1035234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 13:39:41.263244 1035234 out.go:352] Setting JSON to false
	I0120 13:39:41.263279 1035234 mustload.go:65] Loading cluster: multinode-316969
	I0120 13:39:41.263438 1035234 notify.go:220] Checking for updates...
	I0120 13:39:41.263667 1035234 config.go:182] Loaded profile config "multinode-316969": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:39:41.263692 1035234 status.go:174] checking status of multinode-316969 ...
	I0120 13:39:41.264141 1035234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:39:41.264191 1035234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:39:41.284794 1035234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40923
	I0120 13:39:41.285397 1035234 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:39:41.286073 1035234 main.go:141] libmachine: Using API Version  1
	I0120 13:39:41.286107 1035234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:39:41.286603 1035234 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:39:41.286876 1035234 main.go:141] libmachine: (multinode-316969) Calling .GetState
	I0120 13:39:41.288681 1035234 status.go:371] multinode-316969 host status = "Stopped" (err=<nil>)
	I0120 13:39:41.288697 1035234 status.go:384] host is not running, skipping remaining checks
	I0120 13:39:41.288703 1035234 status.go:176] multinode-316969 status: &{Name:multinode-316969 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:39:41.288747 1035234 status.go:174] checking status of multinode-316969-m02 ...
	I0120 13:39:41.289074 1035234 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0120 13:39:41.289147 1035234 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:39:41.305031 1035234 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0120 13:39:41.305477 1035234 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:39:41.306071 1035234 main.go:141] libmachine: Using API Version  1
	I0120 13:39:41.306104 1035234 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:39:41.306463 1035234 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:39:41.306731 1035234 main.go:141] libmachine: (multinode-316969-m02) Calling .GetState
	I0120 13:39:41.308438 1035234 status.go:371] multinode-316969-m02 host status = "Stopped" (err=<nil>)
	I0120 13:39:41.308457 1035234 status.go:384] host is not running, skipping remaining checks
	I0120 13:39:41.308465 1035234 status.go:176] multinode-316969-m02 status: &{Name:multinode-316969-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.96s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (106.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-316969 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-316969 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m46.288837491s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-316969 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.85s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-316969
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-316969-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-316969-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (69.464083ms)

                                                
                                                
-- stdout --
	* [multinode-316969-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-316969-m02' is duplicated with machine name 'multinode-316969-m02' in profile 'multinode-316969'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-316969-m03 --driver=kvm2  --container-runtime=containerd
E0120 13:41:40.517204 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-316969-m03 --driver=kvm2  --container-runtime=containerd: (44.469142597s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-316969
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-316969: exit status 80 (237.409111ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-316969 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-316969-m03 already exists in multinode-316969-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-316969-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.67s)
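
In short, what this test demonstrates (all names taken from the run above): a new profile may not reuse a machine name that already belongs to another profile, and node add refuses a node whose name is already a standalone profile:
	out/minikube-linux-amd64 start -p multinode-316969-m02 --driver=kvm2 --container-runtime=containerd   # exit status 14: clashes with machine m02 of profile multinode-316969
	out/minikube-linux-amd64 start -p multinode-316969-m03 --driver=kvm2 --container-runtime=containerd   # allowed: node m03 was deleted earlier, so the name is free
	out/minikube-linux-amd64 node add -p multinode-316969                                                 # exit status 80: multinode-316969-m03 already exists as its own profile
	out/minikube-linux-amd64 delete -p multinode-316969-m03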

                                                
                                    
TestPreload (159.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-034334 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-034334 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m28.958375527s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-034334 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-034334 image pull gcr.io/k8s-minikube/busybox: (2.778940109s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-034334
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-034334: (6.596408793s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-034334 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0120 13:44:15.261246 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-034334 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (59.995302331s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-034334 image list
helpers_test.go:175: Cleaning up "test-preload-034334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-034334
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-034334: (1.10562825s)
--- PASS: TestPreload (159.66s)
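
A sketch of the preload check above: the cluster is created with --preload=false on an older Kubernetes version, an extra image is pulled, the cluster is stopped and restarted on the current default version, and image list is used to confirm the pulled image survived the restart (flags trimmed to the essentials from this run):
	out/minikube-linux-amd64 start -p test-preload-034334 --memory=2200 --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-034334 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-034334
	out/minikube-linux-amd64 start -p test-preload-034334 --memory=2200 --wait=true --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 -p test-preload-034334 image list   # gcr.io/k8s-minikube/busybox is expected to still be listed
	out/minikube-linux-amd64 delete -p test-preload-034334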

                                                
                                    
TestScheduledStopUnix (115.04s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-737332 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-737332 --memory=2048 --driver=kvm2  --container-runtime=containerd: (43.282166453s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-737332 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-737332 -n scheduled-stop-737332
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-737332 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 13:45:38.718130 1006263 retry.go:31] will retry after 55.742µs: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.719305 1006263 retry.go:31] will retry after 216.917µs: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.720452 1006263 retry.go:31] will retry after 142.814µs: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.721611 1006263 retry.go:31] will retry after 419.621µs: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.722765 1006263 retry.go:31] will retry after 597.72µs: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.723923 1006263 retry.go:31] will retry after 1.112776ms: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.726584 1006263 retry.go:31] will retry after 575.489µs: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.727763 1006263 retry.go:31] will retry after 1.927039ms: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.729971 1006263 retry.go:31] will retry after 3.701375ms: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.734231 1006263 retry.go:31] will retry after 2.638778ms: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.737458 1006263 retry.go:31] will retry after 6.460363ms: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.744694 1006263 retry.go:31] will retry after 12.35077ms: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.757964 1006263 retry.go:31] will retry after 17.905071ms: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.776281 1006263 retry.go:31] will retry after 17.581912ms: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
I0120 13:45:38.794759 1006263 retry.go:31] will retry after 19.479723ms: open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/scheduled-stop-737332/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-737332 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-737332 -n scheduled-stop-737332
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-737332
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-737332 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0120 13:46:23.597702 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:46:40.520634 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-737332
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-737332: exit status 7 (77.937945ms)

                                                
                                                
-- stdout --
	scheduled-stop-737332
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-737332 -n scheduled-stop-737332
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-737332 -n scheduled-stop-737332: exit status 7 (74.288321ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-737332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-737332
--- PASS: TestScheduledStopUnix (115.04s)
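
The scheduled-stop commands exercised above, in order: schedule a stop, inspect the remaining time, cancel, schedule again with a short delay, then poll status until the host reports Stopped (exit status 7 is the expected result at that point):
	out/minikube-linux-amd64 stop -p scheduled-stop-737332 --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-737332
	out/minikube-linux-amd64 stop -p scheduled-stop-737332 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-737332 --schedule 15s
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-737332   # prints "Stopped" with exit status 7 once the schedule fires
	out/minikube-linux-amd64 delete -p scheduled-stop-737332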

                                                
                                    
TestRunningBinaryUpgrade (180.57s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.622787358 start -p running-upgrade-067531 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.622787358 start -p running-upgrade-067531 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m42.549745945s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-067531 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-067531 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m14.253593456s)
helpers_test.go:175: Cleaning up "running-upgrade-067531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-067531
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-067531: (1.187905791s)
--- PASS: TestRunningBinaryUpgrade (180.57s)
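
The upgrade path shown above, reduced to its two steps: a released v1.26.0 binary creates the cluster, then the binary under test restarts the same profile in place (note the old binary still uses the deprecated --vm-driver spelling):
	/tmp/minikube-v1.26.0.622787358 start -p running-upgrade-067531 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 start -p running-upgrade-067531 --memory=2200 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 delete -p running-upgrade-067531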

                                                
                                    
TestKubernetesUpgrade (148.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m8.632022862s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-020885
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-020885: (1.611608427s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-020885 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-020885 status --format={{.Host}}: exit status 7 (83.769241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (36.846861778s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-020885 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (94.793853ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-020885] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-020885
	    minikube start -p kubernetes-upgrade-020885 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0208852 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-020885 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (39.645334425s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-020885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-020885
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-020885: (1.167243196s)
--- PASS: TestKubernetesUpgrade (148.14s)
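
Condensed, the version dance above is: create on v1.20.0, stop, upgrade in place to v1.32.0, verify that a downgrade attempt is rejected, then restart once more on v1.32.0:
	out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-020885
	out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.32.0 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd   # exit status 106: K8S_DOWNGRADE_UNSUPPORTED
	out/minikube-linux-amd64 start -p kubernetes-upgrade-020885 --memory=2200 --kubernetes-version=v1.32.0 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-020885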

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680753 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-680753 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (98.883519ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-680753] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
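
The exit status 14 above is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the rejected call next to the accepted alternatives (the plain --no-kubernetes form is an assumption based on the usage hint, not a command taken from this run):
	out/minikube-linux-amd64 start -p NoKubernetes-680753 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=containerd   # rejected with MK_USAGE, exit status 14
	out/minikube-linux-amd64 start -p NoKubernetes-680753 --no-kubernetes --driver=kvm2 --container-runtime=containerd                             # assumed valid: no version pinned
	minikube config unset kubernetes-version                                                                                                       # clears a globally configured version, as the error hint suggests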

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (122.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680753 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-680753 --driver=kvm2  --container-runtime=containerd: (2m2.535578632s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-680753 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (122.80s)

                                                
                                    
TestNetworkPlugins/group/false (3.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-838971 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-838971 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (116.193737ms)

                                                
                                                
-- stdout --
	* [false-838971] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:47:57.804552 1040045 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:47:57.804811 1040045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:47:57.804824 1040045 out.go:358] Setting ErrFile to fd 2...
	I0120 13:47:57.804828 1040045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:47:57.805077 1040045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
	I0120 13:47:57.805696 1040045 out.go:352] Setting JSON to false
	I0120 13:47:57.806703 1040045 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":12620,"bootTime":1737368258,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 13:47:57.806771 1040045 start.go:139] virtualization: kvm guest
	I0120 13:47:57.809416 1040045 out.go:177] * [false-838971] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 13:47:57.811038 1040045 notify.go:220] Checking for updates...
	I0120 13:47:57.811143 1040045 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:47:57.812757 1040045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:47:57.814269 1040045 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
	I0120 13:47:57.816031 1040045 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
	I0120 13:47:57.817738 1040045 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 13:47:57.819537 1040045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:47:57.821525 1040045 config.go:182] Loaded profile config "NoKubernetes-680753": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:47:57.821644 1040045 config.go:182] Loaded profile config "cert-expiration-700273": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:47:57.821725 1040045 config.go:182] Loaded profile config "force-systemd-env-694770": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:47:57.821828 1040045 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:47:57.859589 1040045 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 13:47:57.861678 1040045 start.go:297] selected driver: kvm2
	I0120 13:47:57.861708 1040045 start.go:901] validating driver "kvm2" against <nil>
	I0120 13:47:57.861725 1040045 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:47:57.864304 1040045 out.go:201] 
	W0120 13:47:57.865751 1040045 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0120 13:47:57.867036 1040045 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-838971 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-838971

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-838971

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-838971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-838971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-838971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-838971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-838971

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-838971

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-838971

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-838971

>>> host: /etc/nsswitch.conf:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /etc/hosts:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /etc/resolv.conf:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-838971

>>> host: crictl pods:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: crictl containers:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> k8s: describe netcat deployment:
error: context "false-838971" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-838971" does not exist

>>> k8s: netcat logs:
error: context "false-838971" does not exist

>>> k8s: describe coredns deployment:
error: context "false-838971" does not exist

>>> k8s: describe coredns pods:
error: context "false-838971" does not exist

>>> k8s: coredns logs:
error: context "false-838971" does not exist

>>> k8s: describe api server pod(s):
error: context "false-838971" does not exist

>>> k8s: api server logs:
error: context "false-838971" does not exist

>>> host: /etc/cni:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: ip a s:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: ip r s:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: iptables-save:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: iptables table nat:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> k8s: describe kube-proxy daemon set:
error: context "false-838971" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-838971" does not exist

>>> k8s: kube-proxy logs:
error: context "false-838971" does not exist

>>> host: kubelet daemon status:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: kubelet daemon config:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> k8s: kubelet logs:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-838971

>>> host: docker daemon status:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: docker daemon config:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /etc/docker/daemon.json:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: docker system info:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: cri-docker daemon status:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: cri-docker daemon config:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: cri-dockerd version:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: containerd daemon status:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: containerd daemon config:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /etc/containerd/config.toml:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: containerd config dump:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: crio daemon status:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: crio daemon config:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: /etc/crio:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

>>> host: crio config:
* Profile "false-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-838971"

----------------------- debugLogs end: false-838971 [took: 3.256127325s] --------------------------------
helpers_test.go:175: Cleaning up "false-838971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-838971
--- PASS: TestNetworkPlugins/group/false (3.55s)

TestNoKubernetes/serial/StartWithStopK8s (53.18s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680753 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0120 13:49:15.264597 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-680753 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (51.802525342s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-680753 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-680753 status -o json: exit status 2 (285.290594ms)

-- stdout --
	{"Name":"NoKubernetes-680753","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-680753
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-680753: (1.08716337s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (53.18s)
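For reference, the flow this test exercises can be replayed by hand against any throwaway profile, assuming a minikube binary on PATH; the profile name below is a placeholder. As the output above shows, minikube status exits non-zero (status 2) while the kubelet and apiserver are stopped, which is the expected state for a --no-kubernetes profile.

    # start a VM without deploying any Kubernetes components
    minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=containerd
    # status reports Host=Running with Kubelet/APIServer=Stopped and exits with code 2
    minikube status -p nok8s-demo -o json || true
    # remove the profile when done
    minikube delete -p nok8s-demo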

TestNoKubernetes/serial/Start (41.47s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680753 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-680753 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (41.471314697s)
--- PASS: TestNoKubernetes/serial/Start (41.47s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-680753 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-680753 "sudo systemctl is-active --quiet service kubelet": exit status 1 (226.521637ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

TestNoKubernetes/serial/ProfileList (1.92s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.92s)

TestNoKubernetes/serial/Stop (1.57s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-680753
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-680753: (1.573169701s)
--- PASS: TestNoKubernetes/serial/Stop (1.57s)

TestNoKubernetes/serial/StartNoArgs (28.83s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-680753 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-680753 --driver=kvm2  --container-runtime=containerd: (28.832876484s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (28.83s)

TestStoppedBinaryUpgrade/Setup (2.57s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.57s)

TestStoppedBinaryUpgrade/Upgrade (123.55s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1279560020 start -p stopped-upgrade-843650 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1279560020 start -p stopped-upgrade-843650 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (56.273593388s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1279560020 -p stopped-upgrade-843650 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1279560020 -p stopped-upgrade-843650 stop: (2.00340236s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-843650 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-843650 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m5.268796361s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (123.55s)
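The upgrade check above reduces to three steps: provision a cluster with an older released binary, stop it, then start the same profile with the binary under test. A rough sketch of that sequence, assuming an old release has been downloaded somewhere under /tmp (the binary path and profile name below are placeholders):

    # create the cluster with the old release; note it is invoked with --vm-driver, as in the log above
    /tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    # stop it with the same old binary
    /tmp/minikube-v1.26.0 -p upgrade-demo stop
    # restart the stopped profile with the binary under test
    minikube start -p upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd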

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-680753 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-680753 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.263876ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestPause/serial/Start (102.81s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-874259 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
E0120 13:51:40.516853 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-874259 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m42.812709687s)
--- PASS: TestPause/serial/Start (102.81s)

TestNetworkPlugins/group/auto/Start (103.94s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m43.943743254s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.94s)

TestPause/serial/SecondStartNoReconfiguration (71.62s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-874259 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-874259 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m11.585587074s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (71.62s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-843650
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-843650: (1.060296826s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

TestNetworkPlugins/group/kindnet/Start (91.05s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m31.047457812s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.05s)

TestNetworkPlugins/group/calico/Start (119.99s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0120 13:53:58.331541 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m59.994413613s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.99s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-838971 "pgrep -a kubelet"
I0120 13:54:05.118024 1006263 config.go:182] Loaded profile config "auto-838971": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-838971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2vfvn" [8acfab0e-e712-49af-9a5e-a9e7f84bf7ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2vfvn" [8acfab0e-e712-49af-9a5e-a9e7f84bf7ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005789874s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.37s)

TestPause/serial/Pause (0.93s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-874259 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-874259 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-874259 --output=json --layout=cluster: exit status 2 (325.183595ms)

-- stdout --
	{"Name":"pause-874259","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-874259","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
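The StatusCode values in the JSON above (200 OK, 405 Stopped, 418 Paused) make the paused state easy to assert on from a script. One way to pull out the per-component state, assuming jq is available; the profile name is a placeholder, and, as in the log, the status command itself exits with code 2 while the cluster is paused:

    # dump cluster-layout status and extract the per-node component states
    minikube status -p pause-demo --output=json --layout=cluster | jq '.Nodes[].Components'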

TestPause/serial/Unpause (0.87s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-874259 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.87s)

TestPause/serial/PauseAgain (1.03s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-874259 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-874259 --alsologtostderr -v=5: (1.029528246s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

TestPause/serial/DeletePaused (1.14s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-874259 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-874259 --alsologtostderr -v=5: (1.139994066s)
--- PASS: TestPause/serial/DeletePaused (1.14s)

TestPause/serial/VerifyDeletedResources (4.32s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.324364695s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.32s)

TestNetworkPlugins/group/custom-flannel/Start (81.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0120 13:54:15.261864 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m21.657076465s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.66s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-838971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
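The DNS, Localhost and HairPin checks above are single kubectl execs against the netcat deployment created earlier by the NetCatPod step, so they can be rerun by hand while the cluster is still up (context name as in the log):

    # service DNS resolves from inside the pod
    kubectl --context auto-838971 exec deployment/netcat -- nslookup kubernetes.default
    # the pod can reach a listener on its own localhost
    kubectl --context auto-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # the pod can reach itself through its service name (hairpin traffic)
    kubectl --context auto-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"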

TestNetworkPlugins/group/enable-default-cni/Start (71.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m11.379499632s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.38s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cbtgl" [7867ddf6-3b30-4194-ad20-bdc3c9e72d78] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004497241s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-838971 "pgrep -a kubelet"
I0120 13:54:42.080400 1006263 config.go:182] Loaded profile config "kindnet-838971": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-838971 replace --force -f testdata/netcat-deployment.yaml
I0120 13:54:42.324026 1006263 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fkl4h" [7aa7bb06-3e3e-4d1e-81bc-bd54a61a29c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-fkl4h" [7aa7bb06-3e3e-4d1e-81bc-bd54a61a29c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004687211s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-838971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (82.62s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m22.622210369s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.62s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hbdm8" [3fd66f5c-eb81-412b-99a1-c8c04d326f18] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00661016s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-838971 "pgrep -a kubelet"
I0120 13:55:28.610878 1006263 config.go:182] Loaded profile config "calico-838971": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-838971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zr7bc" [09f3eee8-b62d-4a50-944f-cf6af0b2a8bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zr7bc" [09f3eee8-b62d-4a50-944f-cf6af0b2a8bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004698087s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-838971 "pgrep -a kubelet"
I0120 13:55:35.970519 1006263 config.go:182] Loaded profile config "custom-flannel-838971": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-838971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-j2nf2" [fe2ed75f-5da3-4735-be82-7a9c209d4a54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-j2nf2" [fe2ed75f-5da3-4735-be82-7a9c209d4a54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.008431077s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-838971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-838971 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

TestNetworkPlugins/group/custom-flannel/DNS (0.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-838971 exec deployment/netcat -- nslookup kubernetes.default
I0120 13:55:45.429028 1006263 config.go:182] Loaded profile config "enable-default-cni-838971": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.92s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-838971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dr4nd" [282c9b70-e8a4-4fcb-b162-497976fc1f34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dr4nd" [282c9b70-e8a4-4fcb-b162-497976fc1f34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004310024s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.85s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (64.31s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-838971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m4.314736485s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-838971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestStartStop/group/old-k8s-version/serial/FirstStart (160.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-743378 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-743378 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m40.429239552s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (160.43s)

TestStartStop/group/no-preload/serial/FirstStart (121.96s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-097312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-097312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (2m1.955419728s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (121.96s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gk8h7" [e6f40128-3454-4da5-86a1-9f0599b3bc2e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006095019s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-838971 "pgrep -a kubelet"
I0120 13:56:39.509498 1006263 config.go:182] Loaded profile config "flannel-838971": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-838971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hljxq" [79dacb11-d529-48e5-8e74-98b1aea6a02f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0120 13:56:40.516254 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-hljxq" [79dacb11-d529-48e5-8e74-98b1aea6a02f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004267986s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-838971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-838971 "pgrep -a kubelet"
I0120 13:57:03.173269 1006263 config.go:182] Loaded profile config "bridge-838971": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-838971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jtgpn" [f23c8ca2-1cf6-45ab-8504-7794887f33c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jtgpn" [f23c8ca2-1cf6-45ab-8504-7794887f33c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004701692s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

TestStartStop/group/embed-certs/serial/FirstStart (77.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-553677 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-553677 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m17.0115671s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.01s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-838971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-838971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0120 14:05:22.368614 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-901416 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-901416 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m28.628433439s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.63s)

TestStartStop/group/no-preload/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-097312 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d3e4d9ac-12bf-4c98-a3eb-2fb12a176126] Pending
helpers_test.go:344: "busybox" [d3e4d9ac-12bf-4c98-a3eb-2fb12a176126] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d3e4d9ac-12bf-4c98-a3eb-2fb12a176126] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006156893s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-097312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.37s)
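
Each DeployApp step follows the same shape: apply testdata/busybox.yaml, wait for the pod carrying the integration-test=busybox label to report Ready, then read its open-file limit. A rough, hedged stand-in is sketched below using kubectl wait; the real test polls the pods through its own helpers, and only the context name, label, manifest path, and timeout are taken from the log.

    // deploy_app.go: hedged sketch of the DeployApp flow above.
    package main

    import (
        "log"
        "os/exec"
    )

    // kubectl runs a kubectl command against the given context and aborts on error.
    func kubectl(kubeContext string, args ...string) []byte {
        full := append([]string{"--context", kubeContext}, args...)
        out, err := exec.Command("kubectl", full...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
        return out
    }

    func main() {
        ctx := "no-preload-097312" // profile name from the log above
        kubectl(ctx, "create", "-f", "testdata/busybox.yaml")
        // The test waits up to 8m for pods matching integration-test=busybox;
        // kubectl wait is used here as a close substitute for its polling helper.
        kubectl(ctx, "wait", "--for=condition=Ready", "pod",
            "-l", "integration-test=busybox", "--timeout=8m")
        out := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
        log.Printf("busybox ulimit -n: %s", out)
    }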

TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-553677 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [408bdc7b-26ca-43df-b6ff-678c206e40d0] Pending
helpers_test.go:344: "busybox" [408bdc7b-26ca-43df-b6ff-678c206e40d0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [408bdc7b-26ca-43df-b6ff-678c206e40d0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003622277s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-553677 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-097312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-097312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.209686089s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-097312 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/no-preload/serial/Stop (91.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-097312 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-097312 --alsologtostderr -v=3: (1m31.095398027s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-553677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-553677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.038117927s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-553677 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)
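
The EnableAddonWhileActive steps pass --images and --registries overrides so the metrics-server addon is pointed at a stand-in image (registry.k8s.io/echoserver:1.4 served from fake.domain), then inspect the resulting Deployment with kubectl describe. The sketch below is a hedged illustration of that inspection: it simply greps the describe output for the fake registry string the test injected, which is an illustrative check rather than the test's own assertion.

    // addon_override_check.go: hedged sketch of verifying the addon image override.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "embed-certs-553677",
            "describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
        if err != nil {
            log.Fatalf("describe metrics-server: %v\n%s", err, out)
        }
        if strings.Contains(string(out), "fake.domain") {
            log.Println("metrics-server deployment picked up the overridden registry")
        } else {
            log.Println("override not visible in the deployment spec; inspect the describe output manually")
        }
    }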

TestStartStop/group/embed-certs/serial/Stop (90.88s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-553677 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-553677 --alsologtostderr -v=3: (1m30.88471163s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.88s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-743378 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ff82c5c2-862c-4a13-8080-67bee756be22] Pending
helpers_test.go:344: "busybox" [ff82c5c2-862c-4a13-8080-67bee756be22] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ff82c5c2-862c-4a13-8080-67bee756be22] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004495737s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-743378 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-743378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-743378 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/old-k8s-version/serial/Stop (91.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-743378 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-743378 --alsologtostderr -v=3: (1m31.077386337s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.08s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-901416 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5c080aaf-6a77-4051-8eeb-a77b6a67c1e5] Pending
helpers_test.go:344: "busybox" [5c080aaf-6a77-4051-8eeb-a77b6a67c1e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0120 13:59:05.462656 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:05.469110 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:05.480651 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:05.502734 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:05.544294 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:05.625807 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:05.787670 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:06.110036 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5c080aaf-6a77-4051-8eeb-a77b6a67c1e5] Running
E0120 13:59:06.751424 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:08.033296 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:10.594886 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003910222s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-901416 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-901416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-901416 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-901416 --alsologtostderr -v=3
E0120 13:59:15.261714 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:15.717248 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:25.959150 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:35.741321 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:35.747809 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:35.759402 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:35.780995 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:35.823153 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:35.904744 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:36.066356 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:36.387687 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:37.029071 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:38.310404 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:40.872463 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:45.994817 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:46.440592 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:59:56.236231 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-901416 --alsologtostderr -v=3: (1m31.384910493s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.39s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-097312 -n no-preload-097312
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-097312 -n no-preload-097312: exit status 7 (80.190816ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-097312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
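
The EnableAddonAfterStop step relies on `minikube status` returning a non-zero exit code once the cluster is stopped (exit status 7 here, with Host reported as Stopped), which the test explicitly treats as acceptable before enabling the dashboard addon. Below is a hedged Go sketch of that tolerance; the exit-code handling mirrors the "may be ok" note in the log rather than a documented contract.

    // enable_after_stop.go: hedged sketch of tolerating a stopped host before enabling an addon.
    package main

    import (
        "errors"
        "log"
        "os/exec"
    )

    func main() {
        profile := "no-preload-097312"
        status := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile)
        out, err := status.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
            // Exit status 7 with "Stopped" is what the log above shows for a
            // stopped profile; the test notes it "may be ok" and carries on.
            log.Printf("host status %q (exit 7), continuing", string(out))
        } else if err != nil {
            log.Fatalf("unexpected status failure: %v\n%s", err, out)
        }
        enable := exec.Command("minikube", "addons", "enable", "dashboard", "-p", profile)
        if out, err := enable.CombinedOutput(); err != nil {
            log.Fatalf("enable dashboard: %v\n%s", err, out)
        }
    }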

TestStartStop/group/no-preload/serial/SecondStart (303.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-097312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-097312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (5m3.407982087s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-097312 -n no-preload-097312
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (303.69s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-553677 -n embed-certs-553677
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-553677 -n embed-certs-553677: exit status 7 (81.566862ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-553677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743378 -n old-k8s-version-743378
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743378 -n old-k8s-version-743378: exit status 7 (80.697277ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-743378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (185.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-743378 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0120 14:00:32.622547 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:36.232794 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:36.239335 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:36.250867 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:36.272446 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:36.314017 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:36.396168 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:36.557825 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:36.879851 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:37.521755 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:38.803663 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:41.365346 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:42.864895 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-743378 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m4.719017165s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-743378 -n old-k8s-version-743378
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (185.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901416 -n default-k8s-diff-port-901416
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901416 -n default-k8s-diff-port-901416: exit status 7 (84.076414ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-901416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (322.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-901416 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 14:00:46.251569 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:46.258014 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:46.269553 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:46.291151 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:46.332736 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:46.414922 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:46.487492 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:46.576529 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:46.898230 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:47.540395 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:48.822101 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:51.383830 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:56.505307 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:56.729868 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:57.679492 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:03.346730 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:06.747166 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:17.211786 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:27.228675 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:33.255171 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:33.261694 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:33.274154 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:33.295873 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:33.337412 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:33.419163 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:33.581061 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:33.903125 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:34.544793 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:35.826432 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:38.388564 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:40.516156 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:43.510546 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:44.308101 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:49.323692 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:53.752899 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:58.173564 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:03.509442 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:03.515954 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:03.527457 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:03.548912 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:03.590411 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:03.671891 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:03.833458 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:04.154916 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:04.796501 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:06.078893 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:08.190103 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:08.641037 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:13.763283 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:14.235037 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:19.601144 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:24.005316 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:44.486838 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:02:55.197247 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:03:03.599900 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/addons-843002/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:03:06.230250 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:03:20.095617 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/custom-flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:03:25.449226 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:03:30.111620 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-901416 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (5m22.129678436s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901416 -n default-k8s-diff-port-901416
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (322.40s)
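
The default-k8s-diff-port profile exists to exercise a non-default API server port (--apiserver-port=8444 in both starts above). One way to spot-check that the restarted cluster really serves on 8444 is to look at the control-plane URL kubectl reports; this is a hedged convenience check under that assumption, not something the test itself runs.

    // check_port.go: hedged sketch confirming the non-default apiserver port.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-901416",
            "cluster-info").CombinedOutput()
        if err != nil {
            log.Fatalf("cluster-info: %v\n%s", err, out)
        }
        if !strings.Contains(string(out), ":8444") {
            log.Fatalf("expected control plane on port 8444, got:\n%s", out)
        }
        log.Println("API server is reachable on port 8444")
    }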

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g2vst" [62d9bc85-5b32-499e-8f89-bcc39cd7429e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003924012s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g2vst" [62d9bc85-5b32-499e-8f89-bcc39cd7429e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005419869s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-743378 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-743378 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
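
VerifyKubernetesImages lists the images loaded in the profile and reports anything outside the set minikube itself ships, which is why kindest/kindnetd and the busybox test image are called out above. The sketch below illustrates the same idea with heavy hedging: the allow-list prefix is an illustrative assumption rather than the test's actual per-version list, and it assumes the default one-image-per-line output of `minikube image list`.

    // verify_images.go: hedged sketch of flagging non-minikube images in a profile.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "old-k8s-version-743378", "image", "list").CombinedOutput()
        if err != nil {
            log.Fatalf("image list: %v\n%s", err, out)
        }
        for _, img := range strings.Fields(string(out)) {
            // Treat registry.k8s.io as "minikube's own" for illustration only; the
            // real test keeps its own expected-image list per Kubernetes version.
            if !strings.HasPrefix(img, "registry.k8s.io/") {
                fmt.Println("Found non-minikube image:", img)
            }
        }
    }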

TestStartStop/group/old-k8s-version/serial/Pause (2.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-743378 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-743378 -n old-k8s-version-743378
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-743378 -n old-k8s-version-743378: exit status 2 (271.80939ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-743378 -n old-k8s-version-743378
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-743378 -n old-k8s-version-743378: exit status 2 (270.096335ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-743378 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-743378 -n old-k8s-version-743378
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-743378 -n old-k8s-version-743378
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.71s)
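
The Pause step is a round trip: pause the profile, confirm the API server reports Paused (again via a tolerated non-zero exit, status 2 this time), unpause, and confirm it comes back. A compact, hedged sketch of that sequence follows, reusing the exact minikube commands from the log; ignoring the status exit code is a simplification, not what the test does.

    // pause_roundtrip.go: hedged sketch of the pause/unpause verification above.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    // hostField returns the requested status field, ignoring the non-zero exit
    // codes that `minikube status` uses to signal paused or stopped components.
    func hostField(profile, field string) string {
        cmd := exec.Command("minikube", "status", "--format={{."+field+"}}", "-p", profile)
        out, _ := cmd.CombinedOutput() // exit status 2 is expected while paused
        return strings.TrimSpace(string(out))
    }

    func main() {
        profile := "old-k8s-version-743378"
        if out, err := exec.Command("minikube", "pause", "-p", profile).CombinedOutput(); err != nil {
            log.Fatalf("pause: %v\n%s", err, out)
        }
        log.Printf("after pause: APIServer=%s Kubelet=%s", hostField(profile, "APIServer"), hostField(profile, "Kubelet"))
        if out, err := exec.Command("minikube", "unpause", "-p", profile).CombinedOutput(); err != nil {
            log.Fatalf("unpause: %v\n%s", err, out)
        }
        log.Printf("after unpause: APIServer=%s Kubelet=%s", hostField(profile, "APIServer"), hostField(profile, "Kubelet"))
    }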

TestStartStop/group/newest-cni/serial/FirstStart (49.4s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-488874 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 14:04:05.462405 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:04:15.261070 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/functional-817722/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:04:17.118992 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/flannel-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:04:33.165172 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:04:35.741383 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-488874 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (49.396685718s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.40s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-488874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-488874 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.24646695s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/newest-cni/serial/Stop (7.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-488874 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-488874 --alsologtostderr -v=3: (7.502274879s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.50s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-488874 -n newest-cni-488874
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-488874 -n newest-cni-488874: exit status 7 (76.125087ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-488874 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (37.48s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-488874 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 14:04:47.370974 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/bridge-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:05:03.443071 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-488874 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.0: (37.192362726s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-488874 -n newest-cni-488874
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.48s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pgfnf" [78b45450-49ad-46c8-a832-0680935e1d74] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004892508s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pgfnf" [78b45450-49ad-46c8-a832-0680935e1d74] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005067624s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-097312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-097312 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (3.3s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-097312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-097312 --alsologtostderr -v=1: (1.013156629s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-097312 -n no-preload-097312
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-097312 -n no-preload-097312: exit status 2 (292.092406ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-097312 -n no-preload-097312
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-097312 -n no-preload-097312: exit status 2 (320.071018ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-097312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-097312 -n no-preload-097312
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-097312 -n no-preload-097312
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.30s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-488874 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.93s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-488874 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-488874 -n newest-cni-488874
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-488874 -n newest-cni-488874: exit status 2 (268.390367ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-488874 -n newest-cni-488874
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-488874 -n newest-cni-488874: exit status 2 (272.254041ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-488874 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-488874 -n newest-cni-488874
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-488874 -n newest-cni-488874
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.93s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-chx7l" [e3cc2317-8580-420d-82d3-5d3f6b9a9378] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005630499s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-chx7l" [e3cc2317-8580-420d-82d3-5d3f6b9a9378] Running
E0120 14:06:13.953733 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/enable-default-cni-838971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004730695s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-901416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-901416 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-901416 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-901416 -n default-k8s-diff-port-901416
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-901416 -n default-k8s-diff-port-901416: exit status 2 (261.031061ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-901416 -n default-k8s-diff-port-901416
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-901416 -n default-k8s-diff-port-901416: exit status 2 (267.234059ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-901416 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-901416 -n default-k8s-diff-port-901416
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-901416 -n default-k8s-diff-port-901416
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

Test skip (38/324)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.0/cached-images 0
15 TestDownloadOnly/v1.32.0/binaries 0
16 TestDownloadOnly/v1.32.0/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.6
265 TestNetworkPlugins/group/cilium 3.89
280 TestStartStop/group/disable-driver-mounts 0.19
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

TestDownloadOnly/v1.32.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

TestDownloadOnly/v1.32.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.6s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-838971 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-838971

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-838971

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-838971

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-838971

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-838971

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-838971

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-838971

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-838971

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-838971

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-838971

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /etc/hosts:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /etc/resolv.conf:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-838971

>>> host: crictl pods:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: crictl containers:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> k8s: describe netcat deployment:
error: context "kubenet-838971" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-838971" does not exist

>>> k8s: netcat logs:
error: context "kubenet-838971" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-838971" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-838971" does not exist

>>> k8s: coredns logs:
error: context "kubenet-838971" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-838971" does not exist

>>> k8s: api server logs:
error: context "kubenet-838971" does not exist

>>> host: /etc/cni:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: ip a s:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: ip r s:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: iptables-save:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: iptables table nat:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-838971" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-838971" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-838971" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: kubelet daemon config:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> k8s: kubelet logs:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-838971

>>> host: docker daemon status:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: docker daemon config:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: docker system info:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: cri-docker daemon status:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: cri-docker daemon config:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: cri-dockerd version:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: containerd daemon status:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: containerd daemon config:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: containerd config dump:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: crio daemon status:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: crio daemon config:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: /etc/crio:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

>>> host: crio config:
* Profile "kubenet-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-838971"

----------------------- debugLogs end: kubenet-838971 [took: 3.421124908s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-838971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-838971
--- SKIP: TestNetworkPlugins/group/kubenet (3.60s)

TestNetworkPlugins/group/cilium (3.89s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-838971 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838971

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-838971

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-838971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-838971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-838971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-838971
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-838971
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-838971
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-838971
>>> host: /etc/nsswitch.conf:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /etc/hosts:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /etc/resolv.conf:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-838971
>>> host: crictl pods:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: crictl containers:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> k8s: describe netcat deployment:
error: context "cilium-838971" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-838971" does not exist
>>> k8s: netcat logs:
error: context "cilium-838971" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-838971" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-838971" does not exist
>>> k8s: coredns logs:
error: context "cilium-838971" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-838971" does not exist
>>> k8s: api server logs:
error: context "cilium-838971" does not exist
>>> host: /etc/cni:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: ip a s:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: ip r s:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: iptables-save:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: iptables table nat:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-838971
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-838971
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-838971" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-838971" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-838971
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-838971
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-838971" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-838971" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-838971" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-838971" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-838971" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: kubelet daemon config:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> k8s: kubelet logs:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20242-998973/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 13:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.158:8443
  name: cert-expiration-700273
contexts:
- context:
    cluster: cert-expiration-700273
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 13:48:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-700273
  name: cert-expiration-700273
current-context: cert-expiration-700273
kind: Config
preferences: {}
users:
- name: cert-expiration-700273
  user:
    client-certificate: /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/cert-expiration-700273/client.crt
    client-key: /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/cert-expiration-700273/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-838971
>>> host: docker daemon status:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: docker daemon config:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: docker system info:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: cri-docker daemon status:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: cri-docker daemon config:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: cri-dockerd version:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: containerd daemon status:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: containerd daemon config:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: containerd config dump:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: crio daemon status:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: crio daemon config:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: /etc/crio:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
>>> host: crio config:
* Profile "cilium-838971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838971"
----------------------- debugLogs end: cilium-838971 [took: 3.678937489s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-838971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-838971
--- SKIP: TestNetworkPlugins/group/cilium (3.89s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-093472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-093472
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)