Test Report: Docker_Linux_containerd_arm64 20427

a480bdc5e776ed1bdb04039eceacb0c7aced7f2e:2025-02-17:38392

Test failures (1/331)

Order  Failed test                                              Duration (s)
305    TestStartStop/group/old-k8s-version/serial/SecondStart   380.62
TestStartStop/group/old-k8s-version/serial/SecondStart (380.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-684625 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0217 13:18:46.325259 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-684625 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m15.425204774s)

-- stdout --
	* [old-k8s-version-684625] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-684625" primary control-plane node in "old-k8s-version-684625" cluster
	* Pulling base image v0.0.46-1739182054-20387 ...
	* Restarting existing docker container for "old-k8s-version-684625" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-684625 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0217 13:18:08.625965 2295157 out.go:345] Setting OutFile to fd 1 ...
	I0217 13:18:08.626170 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:18:08.626197 2295157 out.go:358] Setting ErrFile to fd 2...
	I0217 13:18:08.626214 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:18:08.626489 2295157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 13:18:08.626893 2295157 out.go:352] Setting JSON to false
	I0217 13:18:08.627937 2295157 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":309452,"bootTime":1739488837,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0217 13:18:08.628033 2295157 start.go:139] virtualization:  
	I0217 13:18:08.631539 2295157 out.go:177] * [old-k8s-version-684625] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0217 13:18:08.635491 2295157 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 13:18:08.635560 2295157 notify.go:220] Checking for updates...
	I0217 13:18:08.641709 2295157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 13:18:08.645128 2295157 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	I0217 13:18:08.647976 2295157 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	I0217 13:18:08.651401 2295157 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0217 13:18:08.654466 2295157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 13:18:08.658109 2295157 config.go:182] Loaded profile config "old-k8s-version-684625": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0217 13:18:08.661711 2295157 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0217 13:18:08.664628 2295157 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 13:18:08.718041 2295157 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 13:18:08.718242 2295157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 13:18:08.795607 2295157 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:69 SystemTime:2025-02-17 13:18:08.784630881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 13:18:08.795714 2295157 docker.go:318] overlay module found
	I0217 13:18:08.799208 2295157 out.go:177] * Using the docker driver based on existing profile
	I0217 13:18:08.802023 2295157 start.go:297] selected driver: docker
	I0217 13:18:08.802043 2295157 start.go:901] validating driver "docker" against &{Name:old-k8s-version-684625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 13:18:08.802166 2295157 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 13:18:08.802943 2295157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 13:18:08.868190 2295157 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:69 SystemTime:2025-02-17 13:18:08.85836252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 13:18:08.868577 2295157 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0217 13:18:08.868608 2295157 cni.go:84] Creating CNI manager for ""
	I0217 13:18:08.868646 2295157 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0217 13:18:08.868693 2295157 start.go:340] cluster config:
	{Name:old-k8s-version-684625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 13:18:08.872387 2295157 out.go:177] * Starting "old-k8s-version-684625" primary control-plane node in "old-k8s-version-684625" cluster
	I0217 13:18:08.874510 2295157 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0217 13:18:08.878108 2295157 out.go:177] * Pulling base image v0.0.46-1739182054-20387 ...
	I0217 13:18:08.882190 2295157 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0217 13:18:08.882250 2295157 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0217 13:18:08.882264 2295157 cache.go:56] Caching tarball of preloaded images
	I0217 13:18:08.882280 2295157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
	I0217 13:18:08.882359 2295157 preload.go:172] Found /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0217 13:18:08.882370 2295157 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0217 13:18:08.882482 2295157 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/config.json ...
	I0217 13:18:08.904671 2295157 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon, skipping pull
	I0217 13:18:08.904694 2295157 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad exists in daemon, skipping load
	I0217 13:18:08.904714 2295157 cache.go:230] Successfully downloaded all kic artifacts
	I0217 13:18:08.904745 2295157 start.go:360] acquireMachinesLock for old-k8s-version-684625: {Name:mka6c369035b962d62683df0b54332779fc916c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 13:18:08.904823 2295157 start.go:364] duration metric: took 54.924µs to acquireMachinesLock for "old-k8s-version-684625"
	I0217 13:18:08.904848 2295157 start.go:96] Skipping create...Using existing machine configuration
	I0217 13:18:08.904857 2295157 fix.go:54] fixHost starting: 
	I0217 13:18:08.905154 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
	I0217 13:18:08.922991 2295157 fix.go:112] recreateIfNeeded on old-k8s-version-684625: state=Stopped err=<nil>
	W0217 13:18:08.923026 2295157 fix.go:138] unexpected machine state, will restart: <nil>
	I0217 13:18:08.925705 2295157 out.go:177] * Restarting existing docker container for "old-k8s-version-684625" ...
	I0217 13:18:08.929383 2295157 cli_runner.go:164] Run: docker start old-k8s-version-684625
	I0217 13:18:09.334154 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
	I0217 13:18:09.365477 2295157 kic.go:430] container "old-k8s-version-684625" state is running.
	I0217 13:18:09.365966 2295157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-684625
	I0217 13:18:09.393207 2295157 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/config.json ...
	I0217 13:18:09.393443 2295157 machine.go:93] provisionDockerMachine start ...
	I0217 13:18:09.393509 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:09.415323 2295157 main.go:141] libmachine: Using SSH client type: native
	I0217 13:18:09.415593 2295157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 50067 <nil> <nil>}
	I0217 13:18:09.415613 2295157 main.go:141] libmachine: About to run SSH command:
	hostname
	I0217 13:18:09.417981 2295157 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0217 13:18:12.553517 2295157 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-684625
	
	I0217 13:18:12.553545 2295157 ubuntu.go:169] provisioning hostname "old-k8s-version-684625"
	I0217 13:18:12.553626 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:12.578569 2295157 main.go:141] libmachine: Using SSH client type: native
	I0217 13:18:12.578824 2295157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 50067 <nil> <nil>}
	I0217 13:18:12.578843 2295157 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-684625 && echo "old-k8s-version-684625" | sudo tee /etc/hostname
	I0217 13:18:12.727273 2295157 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-684625
	
	I0217 13:18:12.727422 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:12.748467 2295157 main.go:141] libmachine: Using SSH client type: native
	I0217 13:18:12.748733 2295157 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x414ca0] 0x4174e0 <nil>  [] 0s} 127.0.0.1 50067 <nil> <nil>}
	I0217 13:18:12.748762 2295157 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-684625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-684625/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-684625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0217 13:18:12.890357 2295157 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0217 13:18:12.890386 2295157 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20427-2080001/.minikube CaCertPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20427-2080001/.minikube}
	I0217 13:18:12.890469 2295157 ubuntu.go:177] setting up certificates
	I0217 13:18:12.890490 2295157 provision.go:84] configureAuth start
	I0217 13:18:12.890558 2295157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-684625
	I0217 13:18:12.941443 2295157 provision.go:143] copyHostCerts
	I0217 13:18:12.941511 2295157 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.pem, removing ...
	I0217 13:18:12.941532 2295157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.pem
	I0217 13:18:12.941612 2295157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.pem (1082 bytes)
	I0217 13:18:12.941769 2295157 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-2080001/.minikube/cert.pem, removing ...
	I0217 13:18:12.941781 2295157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-2080001/.minikube/cert.pem
	I0217 13:18:12.941815 2295157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20427-2080001/.minikube/cert.pem (1123 bytes)
	I0217 13:18:12.942055 2295157 exec_runner.go:144] found /home/jenkins/minikube-integration/20427-2080001/.minikube/key.pem, removing ...
	I0217 13:18:12.942064 2295157 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20427-2080001/.minikube/key.pem
	I0217 13:18:12.942108 2295157 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20427-2080001/.minikube/key.pem (1675 bytes)
	I0217 13:18:12.942264 2295157 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-684625 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-684625]
	I0217 13:18:14.063393 2295157 provision.go:177] copyRemoteCerts
	I0217 13:18:14.063506 2295157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0217 13:18:14.063626 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:14.131458 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
	I0217 13:18:14.275893 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0217 13:18:14.341774 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0217 13:18:14.389465 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0217 13:18:14.433626 2295157 provision.go:87] duration metric: took 1.543116912s to configureAuth
	I0217 13:18:14.433767 2295157 ubuntu.go:193] setting minikube options for container-runtime
	I0217 13:18:14.433967 2295157 config.go:182] Loaded profile config "old-k8s-version-684625": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0217 13:18:14.433980 2295157 machine.go:96] duration metric: took 5.040521046s to provisionDockerMachine
	I0217 13:18:14.433988 2295157 start.go:293] postStartSetup for "old-k8s-version-684625" (driver="docker")
	I0217 13:18:14.434003 2295157 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0217 13:18:14.434051 2295157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0217 13:18:14.434097 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:14.463063 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
	I0217 13:18:14.606663 2295157 ssh_runner.go:195] Run: cat /etc/os-release
	I0217 13:18:14.616548 2295157 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0217 13:18:14.616582 2295157 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0217 13:18:14.616592 2295157 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0217 13:18:14.616599 2295157 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0217 13:18:14.616609 2295157 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-2080001/.minikube/addons for local assets ...
	I0217 13:18:14.616662 2295157 filesync.go:126] Scanning /home/jenkins/minikube-integration/20427-2080001/.minikube/files for local assets ...
	I0217 13:18:14.616739 2295157 filesync.go:149] local asset: /home/jenkins/minikube-integration/20427-2080001/.minikube/files/etc/ssl/certs/20853732.pem -> 20853732.pem in /etc/ssl/certs
	I0217 13:18:14.616856 2295157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0217 13:18:14.640038 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/files/etc/ssl/certs/20853732.pem --> /etc/ssl/certs/20853732.pem (1708 bytes)
	I0217 13:18:14.695950 2295157 start.go:296] duration metric: took 261.946482ms for postStartSetup
	I0217 13:18:14.696053 2295157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 13:18:14.696110 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:14.735500 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
	I0217 13:18:14.863200 2295157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0217 13:18:14.870327 2295157 fix.go:56] duration metric: took 5.965462888s for fixHost
	I0217 13:18:14.870350 2295157 start.go:83] releasing machines lock for "old-k8s-version-684625", held for 5.965514629s
	I0217 13:18:14.870416 2295157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-684625
	I0217 13:18:14.894724 2295157 ssh_runner.go:195] Run: cat /version.json
	I0217 13:18:14.894747 2295157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0217 13:18:14.894776 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:14.894801 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:14.921574 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
	I0217 13:18:14.936489 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
	I0217 13:18:15.034814 2295157 ssh_runner.go:195] Run: systemctl --version
	I0217 13:18:15.211070 2295157 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0217 13:18:15.216305 2295157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0217 13:18:15.235746 2295157 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0217 13:18:15.235824 2295157 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0217 13:18:15.245835 2295157 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0217 13:18:15.245859 2295157 start.go:495] detecting cgroup driver to use...
	I0217 13:18:15.245918 2295157 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0217 13:18:15.246007 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0217 13:18:15.263261 2295157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0217 13:18:15.276831 2295157 docker.go:217] disabling cri-docker service (if available) ...
	I0217 13:18:15.276897 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0217 13:18:15.291126 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0217 13:18:15.303941 2295157 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0217 13:18:15.407168 2295157 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0217 13:18:15.515967 2295157 docker.go:233] disabling docker service ...
	I0217 13:18:15.516115 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0217 13:18:15.531249 2295157 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0217 13:18:15.544514 2295157 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0217 13:18:15.650186 2295157 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0217 13:18:15.758151 2295157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0217 13:18:15.772741 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0217 13:18:15.800272 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0217 13:18:15.825800 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0217 13:18:15.844780 2295157 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0217 13:18:15.844870 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0217 13:18:15.855643 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 13:18:15.866608 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0217 13:18:15.880396 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0217 13:18:15.889964 2295157 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0217 13:18:15.901790 2295157 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0217 13:18:15.911888 2295157 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0217 13:18:15.921263 2295157 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0217 13:18:15.930333 2295157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 13:18:16.044646 2295157 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0217 13:18:16.268026 2295157 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0217 13:18:16.268092 2295157 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0217 13:18:16.278321 2295157 start.go:563] Will wait 60s for crictl version
	I0217 13:18:16.278435 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:18:16.284084 2295157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0217 13:18:16.337433 2295157 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0217 13:18:16.337497 2295157 ssh_runner.go:195] Run: containerd --version
	I0217 13:18:16.368611 2295157 ssh_runner.go:195] Run: containerd --version
	I0217 13:18:16.409695 2295157 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
	I0217 13:18:16.412710 2295157 cli_runner.go:164] Run: docker network inspect old-k8s-version-684625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0217 13:18:16.435576 2295157 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0217 13:18:16.439455 2295157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0217 13:18:16.454443 2295157 kubeadm.go:883] updating cluster {Name:old-k8s-version-684625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0217 13:18:16.454574 2295157 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0217 13:18:16.454639 2295157 ssh_runner.go:195] Run: sudo crictl images --output json
	I0217 13:18:16.505590 2295157 containerd.go:627] all images are preloaded for containerd runtime.
	I0217 13:18:16.505697 2295157 containerd.go:534] Images already preloaded, skipping extraction
	I0217 13:18:16.505809 2295157 ssh_runner.go:195] Run: sudo crictl images --output json
	I0217 13:18:16.556464 2295157 containerd.go:627] all images are preloaded for containerd runtime.
	I0217 13:18:16.556483 2295157 cache_images.go:84] Images are preloaded, skipping loading
	I0217 13:18:16.556491 2295157 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0217 13:18:16.556593 2295157 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-684625 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0217 13:18:16.556647 2295157 ssh_runner.go:195] Run: sudo crictl info
	I0217 13:18:16.623899 2295157 cni.go:84] Creating CNI manager for ""
	I0217 13:18:16.623979 2295157 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0217 13:18:16.624005 2295157 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0217 13:18:16.624059 2295157 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-684625 NodeName:old-k8s-version-684625 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0217 13:18:16.624232 2295157 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-684625"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0217 13:18:16.624337 2295157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0217 13:18:16.634753 2295157 binaries.go:44] Found k8s binaries, skipping transfer
	I0217 13:18:16.634929 2295157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0217 13:18:16.644923 2295157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0217 13:18:16.666479 2295157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0217 13:18:16.687885 2295157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0217 13:18:16.712524 2295157 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0217 13:18:16.716303 2295157 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0217 13:18:16.728455 2295157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 13:18:16.830380 2295157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0217 13:18:16.845352 2295157 certs.go:68] Setting up /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625 for IP: 192.168.85.2
	I0217 13:18:16.845369 2295157 certs.go:194] generating shared ca certs ...
	I0217 13:18:16.845385 2295157 certs.go:226] acquiring lock for ca certs: {Name:mk1e57d70f14134ded87b3cd6dacdce4d25ab3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 13:18:16.845533 2295157 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.key
	I0217 13:18:16.845584 2295157 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/proxy-client-ca.key
	I0217 13:18:16.845596 2295157 certs.go:256] generating profile certs ...
	I0217 13:18:16.845705 2295157 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.key
	I0217 13:18:16.845777 2295157 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/apiserver.key.562aa0ca
	I0217 13:18:16.845821 2295157 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/proxy-client.key
	I0217 13:18:16.845932 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/2085373.pem (1338 bytes)
	W0217 13:18:16.845967 2295157 certs.go:480] ignoring /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/2085373_empty.pem, impossibly tiny 0 bytes
	I0217 13:18:16.845980 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca-key.pem (1679 bytes)
	I0217 13:18:16.846007 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/ca.pem (1082 bytes)
	I0217 13:18:16.846034 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/cert.pem (1123 bytes)
	I0217 13:18:16.846059 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/key.pem (1675 bytes)
	I0217 13:18:16.846105 2295157 certs.go:484] found cert: /home/jenkins/minikube-integration/20427-2080001/.minikube/files/etc/ssl/certs/20853732.pem (1708 bytes)
	I0217 13:18:16.846751 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0217 13:18:16.882353 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0217 13:18:16.910722 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0217 13:18:16.971241 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0217 13:18:17.030660 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0217 13:18:17.089201 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0217 13:18:17.120216 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0217 13:18:17.149544 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0217 13:18:17.183779 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/certs/2085373.pem --> /usr/share/ca-certificates/2085373.pem (1338 bytes)
	I0217 13:18:17.211314 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/files/etc/ssl/certs/20853732.pem --> /usr/share/ca-certificates/20853732.pem (1708 bytes)
	I0217 13:18:17.238019 2295157 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20427-2080001/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0217 13:18:17.268373 2295157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0217 13:18:17.300227 2295157 ssh_runner.go:195] Run: openssl version
	I0217 13:18:17.306629 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2085373.pem && ln -fs /usr/share/ca-certificates/2085373.pem /etc/ssl/certs/2085373.pem"
	I0217 13:18:17.318238 2295157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2085373.pem
	I0217 13:18:17.325471 2295157 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 17 12:38 /usr/share/ca-certificates/2085373.pem
	I0217 13:18:17.325615 2295157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2085373.pem
	I0217 13:18:17.337430 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2085373.pem /etc/ssl/certs/51391683.0"
	I0217 13:18:17.348920 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20853732.pem && ln -fs /usr/share/ca-certificates/20853732.pem /etc/ssl/certs/20853732.pem"
	I0217 13:18:17.359292 2295157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20853732.pem
	I0217 13:18:17.363434 2295157 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 17 12:38 /usr/share/ca-certificates/20853732.pem
	I0217 13:18:17.363575 2295157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20853732.pem
	I0217 13:18:17.371035 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20853732.pem /etc/ssl/certs/3ec20f2e.0"
	I0217 13:18:17.380012 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0217 13:18:17.389703 2295157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0217 13:18:17.393538 2295157 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 17 12:32 /usr/share/ca-certificates/minikubeCA.pem
	I0217 13:18:17.393681 2295157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0217 13:18:17.401163 2295157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0217 13:18:17.410115 2295157 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0217 13:18:17.413935 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0217 13:18:17.420815 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0217 13:18:17.427883 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0217 13:18:17.435074 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0217 13:18:17.443015 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0217 13:18:17.450305 2295157 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0217 13:18:17.457621 2295157 kubeadm.go:392] StartCluster: {Name:old-k8s-version-684625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-684625 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 13:18:17.457780 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0217 13:18:17.457888 2295157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0217 13:18:17.526281 2295157 cri.go:89] found id: "d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
	I0217 13:18:17.526310 2295157 cri.go:89] found id: "bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
	I0217 13:18:17.526326 2295157 cri.go:89] found id: "e25655e00932f6940f9106254c70f637b722255928a692b65231ed7503119f81"
	I0217 13:18:17.526332 2295157 cri.go:89] found id: "b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
	I0217 13:18:17.526335 2295157 cri.go:89] found id: "eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
	I0217 13:18:17.526339 2295157 cri.go:89] found id: "b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
	I0217 13:18:17.526342 2295157 cri.go:89] found id: "6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
	I0217 13:18:17.526345 2295157 cri.go:89] found id: "50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
	I0217 13:18:17.526352 2295157 cri.go:89] found id: ""
	I0217 13:18:17.526428 2295157 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0217 13:18:17.547002 2295157 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-02-17T13:18:17Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0217 13:18:17.547190 2295157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0217 13:18:17.559119 2295157 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0217 13:18:17.559204 2295157 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0217 13:18:17.559340 2295157 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0217 13:18:17.571446 2295157 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0217 13:18:17.572148 2295157 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-684625" does not appear in /home/jenkins/minikube-integration/20427-2080001/kubeconfig
	I0217 13:18:17.572398 2295157 kubeconfig.go:62] /home/jenkins/minikube-integration/20427-2080001/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-684625" cluster setting kubeconfig missing "old-k8s-version-684625" context setting]
	I0217 13:18:17.572884 2295157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-2080001/kubeconfig: {Name:mk44077e5743bb96254549e3eaf259b0845749a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 13:18:17.575010 2295157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0217 13:18:17.587613 2295157 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0217 13:18:17.587716 2295157 kubeadm.go:597] duration metric: took 28.474835ms to restartPrimaryControlPlane
	I0217 13:18:17.587750 2295157 kubeadm.go:394] duration metric: took 130.139463ms to StartCluster
	I0217 13:18:17.587814 2295157 settings.go:142] acquiring lock: {Name:mk54d8990a2b55fcc4b6e61aceb051d4e6e4e25d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 13:18:17.587934 2295157 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20427-2080001/kubeconfig
	I0217 13:18:17.588794 2295157 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-2080001/kubeconfig: {Name:mk44077e5743bb96254549e3eaf259b0845749a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 13:18:17.589153 2295157 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0217 13:18:17.589723 2295157 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0217 13:18:17.589837 2295157 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-684625"
	I0217 13:18:17.589861 2295157 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-684625"
	W0217 13:18:17.589872 2295157 addons.go:247] addon storage-provisioner should already be in state true
	I0217 13:18:17.589902 2295157 host.go:66] Checking if "old-k8s-version-684625" exists ...
	I0217 13:18:17.590522 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
	I0217 13:18:17.590914 2295157 config.go:182] Loaded profile config "old-k8s-version-684625": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0217 13:18:17.591063 2295157 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-684625"
	I0217 13:18:17.591129 2295157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-684625"
	I0217 13:18:17.591531 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
	I0217 13:18:17.594337 2295157 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-684625"
	I0217 13:18:17.594689 2295157 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-684625"
	W0217 13:18:17.594731 2295157 addons.go:247] addon metrics-server should already be in state true
	I0217 13:18:17.594816 2295157 host.go:66] Checking if "old-k8s-version-684625" exists ...
	I0217 13:18:17.594445 2295157 out.go:177] * Verifying Kubernetes components...
	I0217 13:18:17.594542 2295157 addons.go:69] Setting dashboard=true in profile "old-k8s-version-684625"
	I0217 13:18:17.595840 2295157 addons.go:238] Setting addon dashboard=true in "old-k8s-version-684625"
	W0217 13:18:17.597251 2295157 addons.go:247] addon dashboard should already be in state true
	I0217 13:18:17.597431 2295157 host.go:66] Checking if "old-k8s-version-684625" exists ...
	I0217 13:18:17.597238 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
	I0217 13:18:17.599756 2295157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0217 13:18:17.606987 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
	I0217 13:18:17.645729 2295157 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-684625"
	W0217 13:18:17.645756 2295157 addons.go:247] addon default-storageclass should already be in state true
	I0217 13:18:17.645798 2295157 host.go:66] Checking if "old-k8s-version-684625" exists ...
	I0217 13:18:17.646354 2295157 cli_runner.go:164] Run: docker container inspect old-k8s-version-684625 --format={{.State.Status}}
	I0217 13:18:17.688895 2295157 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0217 13:18:17.692040 2295157 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0217 13:18:17.692067 2295157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0217 13:18:17.692142 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:17.717110 2295157 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0217 13:18:17.725751 2295157 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0217 13:18:17.725988 2295157 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0217 13:18:17.726004 2295157 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0217 13:18:17.726117 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:17.732230 2295157 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0217 13:18:17.737832 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0217 13:18:17.737869 2295157 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0217 13:18:17.737973 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:17.744324 2295157 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0217 13:18:17.744357 2295157 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0217 13:18:17.744443 2295157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-684625
	I0217 13:18:17.770842 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
	I0217 13:18:17.815685 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
	I0217 13:18:17.818938 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
	I0217 13:18:17.821365 2295157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50067 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/old-k8s-version-684625/id_rsa Username:docker}
	I0217 13:18:17.899081 2295157 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0217 13:18:17.954967 2295157 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-684625" to be "Ready" ...
	I0217 13:18:18.037410 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0217 13:18:18.040878 2295157 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0217 13:18:18.040967 2295157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0217 13:18:18.145087 2295157 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0217 13:18:18.145172 2295157 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0217 13:18:18.150656 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0217 13:18:18.150737 2295157 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0217 13:18:18.173392 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0217 13:18:18.218151 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0217 13:18:18.218235 2295157 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0217 13:18:18.230955 2295157 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0217 13:18:18.231036 2295157 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0217 13:18:18.275081 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0217 13:18:18.275171 2295157 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0217 13:18:18.304123 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0217 13:18:18.363190 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0217 13:18:18.363263 2295157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0217 13:18:18.401685 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.401758 2295157 retry.go:31] will retry after 150.911569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:18.430068 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.430186 2295157 retry.go:31] will retry after 258.316003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.451499 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0217 13:18:18.451577 2295157 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0217 13:18:18.500481 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0217 13:18:18.500508 2295157 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0217 13:18:18.545345 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0217 13:18:18.545385 2295157 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0217 13:18:18.553801 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0217 13:18:18.569400 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0217 13:18:18.569439 2295157 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0217 13:18:18.585995 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.586038 2295157 retry.go:31] will retry after 292.791338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.628791 2295157 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0217 13:18:18.628821 2295157 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0217 13:18:18.689267 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0217 13:18:18.697370 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.697404 2295157 retry.go:31] will retry after 276.99122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.702414 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0217 13:18:18.879854 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0217 13:18:18.909299 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.909336 2295157 retry.go:31] will retry after 514.923774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:18.914967 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.914997 2295157 retry.go:31] will retry after 363.856496ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:18.974907 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0217 13:18:19.052531 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.052645 2295157 retry.go:31] will retry after 511.942409ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:19.169429 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.169518 2295157 retry.go:31] will retry after 358.176208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.279866 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0217 13:18:19.404529 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.404647 2295157 retry.go:31] will retry after 555.062538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.425007 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0217 13:18:19.526129 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.526223 2295157 retry.go:31] will retry after 419.422751ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.528273 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0217 13:18:19.565684 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0217 13:18:19.645935 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.646034 2295157 retry.go:31] will retry after 930.325939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:19.721364 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.721456 2295157 retry.go:31] will retry after 793.821457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:19.945901 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0217 13:18:19.955662 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": dial tcp 192.168.85.2:8443: connect: connection refused
	I0217 13:18:19.960017 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0217 13:18:20.068986 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:20.069069 2295157 retry.go:31] will retry after 765.192301ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:20.149813 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:20.149912 2295157 retry.go:31] will retry after 590.508182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:20.515911 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0217 13:18:20.577354 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0217 13:18:20.618530 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:20.618610 2295157 retry.go:31] will retry after 1.061356629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:20.701856 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:20.701936 2295157 retry.go:31] will retry after 1.071140655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:20.741110 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0217 13:18:20.834645 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0217 13:18:20.847981 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:20.848060 2295157 retry.go:31] will retry after 951.859228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:20.940956 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:20.941033 2295157 retry.go:31] will retry after 956.309552ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:21.680173 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0217 13:18:21.767606 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:21.767634 2295157 retry.go:31] will retry after 1.482756733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:21.773857 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0217 13:18:21.800073 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0217 13:18:21.897517 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0217 13:18:21.923955 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:21.923983 2295157 retry.go:31] will retry after 2.795962653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:22.036271 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:22.036392 2295157 retry.go:31] will retry after 1.118084246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:22.065813 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:22.065843 2295157 retry.go:31] will retry after 1.662572099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:22.455613 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": dial tcp 192.168.85.2:8443: connect: connection refused
	I0217 13:18:23.154952 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0217 13:18:23.250866 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0217 13:18:23.254080 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:23.254113 2295157 retry.go:31] will retry after 1.864580743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0217 13:18:23.343309 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:23.343346 2295157 retry.go:31] will retry after 2.811155514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:23.729488 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0217 13:18:23.834695 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:23.834722 2295157 retry.go:31] will retry after 1.967376353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:24.456374 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": dial tcp 192.168.85.2:8443: connect: connection refused
	I0217 13:18:24.720931 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0217 13:18:24.821559 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:24.821610 2295157 retry.go:31] will retry after 2.740959084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:25.119782 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0217 13:18:25.218887 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:25.218922 2295157 retry.go:31] will retry after 2.866268131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:25.802895 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0217 13:18:25.890827 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:25.890859 2295157 retry.go:31] will retry after 3.490245305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:26.154712 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0217 13:18:26.354138 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:26.354170 2295157 retry.go:31] will retry after 2.171663456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0217 13:18:26.955587 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": dial tcp 192.168.85.2:8443: connect: connection refused
	I0217 13:18:27.563292 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0217 13:18:28.085390 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0217 13:18:28.526026 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0217 13:18:29.381908 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0217 13:18:37.458144 2295157 node_ready.go:53] error getting node "old-k8s-version-684625": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-684625": net/http: TLS handshake timeout
	I0217 13:18:37.903327 2295157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.339994488s)
	W0217 13:18:37.903357 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0217 13:18:37.903373 2295157 retry.go:31] will retry after 4.908317628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0217 13:18:38.435344 2295157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.349904947s)
	W0217 13:18:38.435391 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0217 13:18:38.435407 2295157 retry.go:31] will retry after 5.696676717s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0217 13:18:38.838321 2295157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.312249521s)
	W0217 13:18:38.838353 2295157 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0217 13:18:38.838370 2295157 retry.go:31] will retry after 4.583534276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0217 13:18:39.353715 2295157 node_ready.go:49] node "old-k8s-version-684625" has status "Ready":"True"
	I0217 13:18:39.353743 2295157 node_ready.go:38] duration metric: took 21.398644051s for node "old-k8s-version-684625" to be "Ready" ...
	I0217 13:18:39.353765 2295157 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0217 13:18:39.522946 2295157 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-hbrnk" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.658180 2295157 pod_ready.go:93] pod "coredns-74ff55c5b-hbrnk" in "kube-system" namespace has status "Ready":"True"
	I0217 13:18:39.658201 2295157 pod_ready.go:82] duration metric: took 135.214164ms for pod "coredns-74ff55c5b-hbrnk" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.658213 2295157 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.711948 2295157 pod_ready.go:93] pod "etcd-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"True"
	I0217 13:18:39.711994 2295157 pod_ready.go:82] duration metric: took 53.771916ms for pod "etcd-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.712016 2295157 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.756772 2295157 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"True"
	I0217 13:18:39.756803 2295157 pod_ready.go:82] duration metric: took 44.779045ms for pod "kube-apiserver-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.756816 2295157 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.766763 2295157 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"True"
	I0217 13:18:39.766805 2295157 pod_ready.go:82] duration metric: took 9.98055ms for pod "kube-controller-manager-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.766818 2295157 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xhtkg" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.829306 2295157 pod_ready.go:93] pod "kube-proxy-xhtkg" in "kube-system" namespace has status "Ready":"True"
	I0217 13:18:39.829329 2295157 pod_ready.go:82] duration metric: took 62.503567ms for pod "kube-proxy-xhtkg" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:39.829341 2295157 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
	I0217 13:18:40.097818 2295157 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.715865457s)
	I0217 13:18:41.834948 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:18:42.812211 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0217 13:18:43.423049 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0217 13:18:43.724574 2295157 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-684625"
	I0217 13:18:43.835761 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:18:44.133000 2295157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0217 13:18:44.605574 2295157 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-684625 addons enable metrics-server
	
	I0217 13:18:44.608538 2295157 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0217 13:18:44.611712 2295157 addons.go:514] duration metric: took 27.021997868s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0217 13:18:46.336658 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:18:48.834230 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:18:50.834780 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:18:53.334677 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:18:55.835743 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:18:58.348472 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:00.838188 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:02.845514 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:05.335267 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:07.335374 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:09.371597 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:11.834925 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:13.846606 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:16.334805 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:18.835176 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:21.334574 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:23.835435 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:25.835656 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:28.334466 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:30.335742 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:32.337189 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:34.343677 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:36.838662 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:39.336406 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:41.838833 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:44.336721 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:46.835146 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:48.835441 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:50.836645 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:53.335066 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:55.839913 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:58.335313 2295157 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"False"
	I0217 13:19:59.841082 2295157 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace has status "Ready":"True"
	I0217 13:19:59.841108 2295157 pod_ready.go:82] duration metric: took 1m20.011758848s for pod "kube-scheduler-old-k8s-version-684625" in "kube-system" namespace to be "Ready" ...
	I0217 13:19:59.841120 2295157 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace to be "Ready" ...
	I0217 13:20:01.846998 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:04.347209 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:06.847289 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:08.847346 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:11.346494 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:13.846673 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:16.347207 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:18.847426 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:21.346549 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:23.346826 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:25.352377 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:27.847604 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:30.346817 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:32.846613 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:34.847062 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:37.347523 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:39.846307 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:41.852050 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:44.346818 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:46.846689 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:48.847082 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:50.847364 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:53.346723 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:55.346762 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:57.846811 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:20:59.846948 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:02.346604 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:04.847068 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:07.346917 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:09.349083 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:11.846598 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:14.347656 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:16.846837 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:19.346455 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:21.347583 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:23.845798 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:25.846840 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:27.846999 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:30.346256 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:32.846763 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:35.346628 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:37.347461 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:39.851961 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:42.347939 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:44.846311 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:46.846946 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:49.347039 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:51.847466 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:54.347444 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:56.847348 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:21:59.346259 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:01.346444 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:03.348141 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:05.847142 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:08.347497 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:10.846362 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:13.346497 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:15.846629 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:17.846959 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:20.346416 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:22.347252 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:24.847048 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:27.346827 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:29.847009 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:32.349273 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:34.847769 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:37.346166 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:39.346814 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:41.347109 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:43.846441 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:45.846759 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:48.346492 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:50.346738 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:52.347184 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:54.847427 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:57.346427 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:22:59.846790 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:01.846951 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:03.847085 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:06.346078 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:08.347146 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:10.846304 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:13.346337 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:15.353730 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:17.846539 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:20.346840 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:22.347359 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:24.846220 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:26.846316 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:28.850223 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:31.346515 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:33.847115 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:36.346645 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:38.346974 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:40.846807 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:42.847299 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:44.848827 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:47.347148 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:49.349279 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:51.847190 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:53.847663 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:55.849015 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:58.346357 2295157 pod_ready.go:103] pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace has status "Ready":"False"
	I0217 13:23:59.847119 2295157 pod_ready.go:82] duration metric: took 4m0.005984559s for pod "metrics-server-9975d5f86-bj72q" in "kube-system" namespace to be "Ready" ...
	E0217 13:23:59.847208 2295157 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0217 13:23:59.847225 2295157 pod_ready.go:39] duration metric: took 5m20.493442033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0217 13:23:59.847242 2295157 api_server.go:52] waiting for apiserver process to appear ...
	I0217 13:23:59.847282 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0217 13:23:59.847357 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0217 13:23:59.885005 2295157 cri.go:89] found id: "1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d"
	I0217 13:23:59.885030 2295157 cri.go:89] found id: "b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
	I0217 13:23:59.885035 2295157 cri.go:89] found id: ""
	I0217 13:23:59.885042 2295157 logs.go:282] 2 containers: [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9]
	I0217 13:23:59.885102 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:23:59.888705 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:23:59.892160 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0217 13:23:59.892235 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0217 13:23:59.938246 2295157 cri.go:89] found id: "8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b"
	I0217 13:23:59.938267 2295157 cri.go:89] found id: "6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
	I0217 13:23:59.938272 2295157 cri.go:89] found id: ""
	I0217 13:23:59.938279 2295157 logs.go:282] 2 containers: [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b 6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3]
	I0217 13:23:59.938339 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:23:59.941943 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:23:59.945478 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0217 13:23:59.945570 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0217 13:23:59.985037 2295157 cri.go:89] found id: "7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790"
	I0217 13:23:59.985058 2295157 cri.go:89] found id: "d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
	I0217 13:23:59.985063 2295157 cri.go:89] found id: ""
	I0217 13:23:59.985070 2295157 logs.go:282] 2 containers: [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790 d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994]
	I0217 13:23:59.985126 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:23:59.988758 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:23:59.992103 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0217 13:23:59.992195 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0217 13:24:00.112402 2295157 cri.go:89] found id: "4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c"
	I0217 13:24:00.112852 2295157 cri.go:89] found id: "50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
	I0217 13:24:00.112864 2295157 cri.go:89] found id: ""
	I0217 13:24:00.112873 2295157 logs.go:282] 2 containers: [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c 50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c]
	I0217 13:24:00.112963 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.122975 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.131554 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0217 13:24:00.131656 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0217 13:24:00.250520 2295157 cri.go:89] found id: "8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e"
	I0217 13:24:00.250606 2295157 cri.go:89] found id: "b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
	I0217 13:24:00.250629 2295157 cri.go:89] found id: ""
	I0217 13:24:00.250656 2295157 logs.go:282] 2 containers: [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5]
	I0217 13:24:00.250771 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.260928 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.271266 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0217 13:24:00.271517 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0217 13:24:00.341287 2295157 cri.go:89] found id: "153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044"
	I0217 13:24:00.341369 2295157 cri.go:89] found id: "eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
	I0217 13:24:00.341391 2295157 cri.go:89] found id: ""
	I0217 13:24:00.341418 2295157 logs.go:282] 2 containers: [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044 eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9]
	I0217 13:24:00.341502 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.346938 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.351739 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0217 13:24:00.351871 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0217 13:24:00.403565 2295157 cri.go:89] found id: "1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d"
	I0217 13:24:00.403643 2295157 cri.go:89] found id: "bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
	I0217 13:24:00.403664 2295157 cri.go:89] found id: ""
	I0217 13:24:00.403690 2295157 logs.go:282] 2 containers: [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767]
	I0217 13:24:00.403769 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.408046 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.412192 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0217 13:24:00.412306 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0217 13:24:00.457827 2295157 cri.go:89] found id: "21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42"
	I0217 13:24:00.457852 2295157 cri.go:89] found id: ""
	I0217 13:24:00.457862 2295157 logs.go:282] 1 containers: [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42]
	I0217 13:24:00.457930 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.462436 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0217 13:24:00.462599 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0217 13:24:00.509072 2295157 cri.go:89] found id: "9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6"
	I0217 13:24:00.509150 2295157 cri.go:89] found id: "758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f"
	I0217 13:24:00.509170 2295157 cri.go:89] found id: ""
	I0217 13:24:00.509193 2295157 logs.go:282] 2 containers: [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6 758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f]
	I0217 13:24:00.509281 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.513343 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:00.517930 2295157 logs.go:123] Gathering logs for kindnet [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d] ...
	I0217 13:24:00.517967 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d"
	I0217 13:24:00.564871 2295157 logs.go:123] Gathering logs for container status ...
	I0217 13:24:00.564904 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0217 13:24:00.632262 2295157 logs.go:123] Gathering logs for kubelet ...
	I0217 13:24:00.632290 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0217 13:24:00.691097 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.309993     661 reflector.go:138] object-"kube-system"/"kindnet-token-vfbnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vfbnq" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:00.691379 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.310267     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-zqt6v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-zqt6v" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:00.691594 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.311034     661 reflector.go:138] object-"default"/"default-token-jrqqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-jrqqq" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:00.691801 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.315771     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:00.692019 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.316033     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-ghwn6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-ghwn6" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:00.692240 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.317925     661 reflector.go:138] object-"kube-system"/"metrics-server-token-bpn96": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bpn96" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:00.692451 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.318273     661 reflector.go:138] object-"kube-system"/"coredns-token-f6dfc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f6dfc" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:00.692650 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.319367     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:00.701972 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.094609     661 pod_workers.go:191] Error syncing pod db74f299-b905-402c-8142-b6b360bb1ae2 ("kindnet-d7wd6_kube-system(db74f299-b905-402c-8142-b6b360bb1ae2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.704829 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.610279     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:00.706764 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.776479     661 pod_workers.go:191] Error syncing pod db74f299-b905-402c-8142-b6b360bb1ae2 ("kindnet-d7wd6_kube-system(db74f299-b905-402c-8142-b6b360bb1ae2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.706960 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.798146     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.707899 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.848610     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.708717 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.918969     661 pod_workers.go:191] Error syncing pod bff5b8f0-6b85-450b-804b-24e5e32c97ba ("busybox_default(bff5b8f0-6b85-450b-804b-24e5e32c97ba)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.710961 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.210034     661 pod_workers.go:191] Error syncing pod ae0c5ea7-e2c9-427f-bdf2-a284c975e898 ("coredns-74ff55c5b-hbrnk_kube-system(ae0c5ea7-e2c9-427f-bdf2-a284c975e898)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.712739 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.797379     661 pod_workers.go:191] Error syncing pod ae0c5ea7-e2c9-427f-bdf2-a284c975e898 ("coredns-74ff55c5b-hbrnk_kube-system(ae0c5ea7-e2c9-427f-bdf2-a284c975e898)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.713797 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.800620     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.714764 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.803924     661 pod_workers.go:191] Error syncing pod bff5b8f0-6b85-450b-804b-24e5e32c97ba ("busybox_default(bff5b8f0-6b85-450b-804b-24e5e32c97ba)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.716352 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:43 old-k8s-version-684625 kubelet[661]: E0217 13:18:43.302571     661 pod_workers.go:191] Error syncing pod ffc7d812-df4e-4e5f-8523-a049282e4e8a ("kube-proxy-xhtkg_kube-system(ffc7d812-df4e-4e5f-8523-a049282e4e8a)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.717881 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:43 old-k8s-version-684625 kubelet[661]: E0217 13:18:43.837069     661 pod_workers.go:191] Error syncing pod ffc7d812-df4e-4e5f-8523-a049282e4e8a ("kube-proxy-xhtkg_kube-system(ffc7d812-df4e-4e5f-8523-a049282e4e8a)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:00.721039 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:54 old-k8s-version-684625 kubelet[661]: E0217 13:18:54.524274     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:00.723622 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:05 old-k8s-version-684625 kubelet[661]: E0217 13:19:05.936388     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.724082 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:06 old-k8s-version-684625 kubelet[661]: E0217 13:19:06.947715     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.724270 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:08 old-k8s-version-684625 kubelet[661]: E0217 13:19:08.506836     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.724621 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:15 old-k8s-version-684625 kubelet[661]: E0217 13:19:15.014578     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.727386 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:20 old-k8s-version-684625 kubelet[661]: E0217 13:19:20.517133     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:00.727832 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:25 old-k8s-version-684625 kubelet[661]: E0217 13:19:25.001801     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
	W0217 13:24:00.728436 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:28 old-k8s-version-684625 kubelet[661]: E0217 13:19:28.017166     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.728627 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:31 old-k8s-version-684625 kubelet[661]: E0217 13:19:31.511330     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.728957 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:35 old-k8s-version-684625 kubelet[661]: E0217 13:19:35.014933     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.729265 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:38 old-k8s-version-684625 kubelet[661]: E0217 13:19:38.507288     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
	W0217 13:24:00.729453 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:43 old-k8s-version-684625 kubelet[661]: E0217 13:19:43.507026     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.730055 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:49 old-k8s-version-684625 kubelet[661]: E0217 13:19:49.129541     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.730372 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:52 old-k8s-version-684625 kubelet[661]: E0217 13:19:52.506152     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
	W0217 13:24:00.730697 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:55 old-k8s-version-684625 kubelet[661]: E0217 13:19:55.015087     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.730881 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:58 old-k8s-version-684625 kubelet[661]: E0217 13:19:58.506545     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.731335 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:07 old-k8s-version-684625 kubelet[661]: E0217 13:20:07.506178     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.733759 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:13 old-k8s-version-684625 kubelet[661]: E0217 13:20:13.517749     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:00.734087 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:20 old-k8s-version-684625 kubelet[661]: E0217 13:20:20.506171     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.734273 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:24 old-k8s-version-684625 kubelet[661]: E0217 13:20:24.512163     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.734861 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:32 old-k8s-version-684625 kubelet[661]: E0217 13:20:32.259438     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.735185 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:35 old-k8s-version-684625 kubelet[661]: E0217 13:20:35.014985     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.735368 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:39 old-k8s-version-684625 kubelet[661]: E0217 13:20:39.507286     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.735694 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:50 old-k8s-version-684625 kubelet[661]: E0217 13:20:50.506239     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.735879 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:51 old-k8s-version-684625 kubelet[661]: E0217 13:20:51.506540     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.736217 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:01 old-k8s-version-684625 kubelet[661]: E0217 13:21:01.506830     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.736403 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:06 old-k8s-version-684625 kubelet[661]: E0217 13:21:06.508486     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.736727 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:14 old-k8s-version-684625 kubelet[661]: E0217 13:21:14.506262     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.736911 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:20 old-k8s-version-684625 kubelet[661]: E0217 13:21:20.506827     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.737366 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:29 old-k8s-version-684625 kubelet[661]: E0217 13:21:29.507037     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.737569 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:31 old-k8s-version-684625 kubelet[661]: E0217 13:21:31.506506     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.737915 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:40 old-k8s-version-684625 kubelet[661]: E0217 13:21:40.506099     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.740368 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:46 old-k8s-version-684625 kubelet[661]: E0217 13:21:46.514608     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:00.740958 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:53 old-k8s-version-684625 kubelet[661]: E0217 13:21:53.486312     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.741283 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:55 old-k8s-version-684625 kubelet[661]: E0217 13:21:55.014989     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.741467 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:57 old-k8s-version-684625 kubelet[661]: E0217 13:21:57.513924     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.741800 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:05 old-k8s-version-684625 kubelet[661]: E0217 13:22:05.506670     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.741987 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:10 old-k8s-version-684625 kubelet[661]: E0217 13:22:10.506686     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.742316 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:16 old-k8s-version-684625 kubelet[661]: E0217 13:22:16.506151     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.742501 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:21 old-k8s-version-684625 kubelet[661]: E0217 13:22:21.506684     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.742853 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:27 old-k8s-version-684625 kubelet[661]: E0217 13:22:27.511459     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.743038 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:33 old-k8s-version-684625 kubelet[661]: E0217 13:22:33.506664     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.743365 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:40 old-k8s-version-684625 kubelet[661]: E0217 13:22:40.506352     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.743548 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:48 old-k8s-version-684625 kubelet[661]: E0217 13:22:48.506643     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.743876 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:52 old-k8s-version-684625 kubelet[661]: E0217 13:22:52.506276     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.744059 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:02 old-k8s-version-684625 kubelet[661]: E0217 13:23:02.506740     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.744385 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:03 old-k8s-version-684625 kubelet[661]: E0217 13:23:03.506425     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.744713 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511407     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.744898 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511756     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.745083 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:27 old-k8s-version-684625 kubelet[661]: E0217 13:23:27.506707     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.745407 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: E0217 13:23:30.506203     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.745594 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:39 old-k8s-version-684625 kubelet[661]: E0217 13:23:39.506666     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.745930 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:00.746116 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:00.746443 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	I0217 13:24:00.746457 2295157 logs.go:123] Gathering logs for kube-apiserver [b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9] ...
	I0217 13:24:00.746474 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
	I0217 13:24:00.807721 2295157 logs.go:123] Gathering logs for coredns [d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994] ...
	I0217 13:24:00.807772 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
	I0217 13:24:00.858844 2295157 logs.go:123] Gathering logs for kube-scheduler [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c] ...
	I0217 13:24:00.858931 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c"
	I0217 13:24:00.903061 2295157 logs.go:123] Gathering logs for kube-proxy [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e] ...
	I0217 13:24:00.903090 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e"
	I0217 13:24:00.944378 2295157 logs.go:123] Gathering logs for kube-controller-manager [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044] ...
	I0217 13:24:00.944403 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044"
	I0217 13:24:01.003006 2295157 logs.go:123] Gathering logs for etcd [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b] ...
	I0217 13:24:01.003040 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b"
	I0217 13:24:01.046449 2295157 logs.go:123] Gathering logs for coredns [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790] ...
	I0217 13:24:01.046545 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790"
	I0217 13:24:01.096817 2295157 logs.go:123] Gathering logs for kube-controller-manager [eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9] ...
	I0217 13:24:01.096849 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
	I0217 13:24:01.179600 2295157 logs.go:123] Gathering logs for kubernetes-dashboard [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42] ...
	I0217 13:24:01.179656 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42"
	I0217 13:24:01.231305 2295157 logs.go:123] Gathering logs for describe nodes ...
	I0217 13:24:01.231336 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0217 13:24:01.424769 2295157 logs.go:123] Gathering logs for kube-apiserver [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d] ...
	I0217 13:24:01.424811 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d"
	I0217 13:24:01.498259 2295157 logs.go:123] Gathering logs for etcd [6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3] ...
	I0217 13:24:01.498310 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
	I0217 13:24:01.570184 2295157 logs.go:123] Gathering logs for kube-scheduler [50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c] ...
	I0217 13:24:01.570281 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
	I0217 13:24:01.637074 2295157 logs.go:123] Gathering logs for storage-provisioner [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6] ...
	I0217 13:24:01.637174 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6"
	I0217 13:24:01.734430 2295157 logs.go:123] Gathering logs for storage-provisioner [758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f] ...
	I0217 13:24:01.734458 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f"
	I0217 13:24:01.805516 2295157 logs.go:123] Gathering logs for dmesg ...
	I0217 13:24:01.805548 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0217 13:24:01.845813 2295157 logs.go:123] Gathering logs for kube-proxy [b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5] ...
	I0217 13:24:01.845842 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
	I0217 13:24:01.905384 2295157 logs.go:123] Gathering logs for kindnet [bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767] ...
	I0217 13:24:01.905415 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
	I0217 13:24:01.948076 2295157 logs.go:123] Gathering logs for containerd ...
	I0217 13:24:01.948167 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0217 13:24:02.157193 2295157 out.go:358] Setting ErrFile to fd 2...
	I0217 13:24:02.157231 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0217 13:24:02.157304 2295157 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0217 13:24:02.157320 2295157 out.go:270]   Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: E0217 13:23:30.506203     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	  Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: E0217 13:23:30.506203     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:02.157331 2295157 out.go:270]   Feb 17 13:23:39 old-k8s-version-684625 kubelet[661]: E0217 13:23:39.506666     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Feb 17 13:23:39 old-k8s-version-684625 kubelet[661]: E0217 13:23:39.506666     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:02.157344 2295157 out.go:270]   Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	  Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:02.157467 2295157 out.go:270]   Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:02.157482 2295157 out.go:270]   Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	  Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	I0217 13:24:02.157497 2295157 out.go:358] Setting ErrFile to fd 2...
	I0217 13:24:02.157507 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:24:12.159325 2295157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 13:24:12.172465 2295157 api_server.go:72] duration metric: took 5m54.583233373s to wait for apiserver process to appear ...
	I0217 13:24:12.172490 2295157 api_server.go:88] waiting for apiserver healthz status ...
	I0217 13:24:12.172527 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0217 13:24:12.172585 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0217 13:24:12.247723 2295157 cri.go:89] found id: "1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d"
	I0217 13:24:12.247746 2295157 cri.go:89] found id: "b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
	I0217 13:24:12.247750 2295157 cri.go:89] found id: ""
	I0217 13:24:12.247758 2295157 logs.go:282] 2 containers: [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9]
	I0217 13:24:12.247819 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.252444 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.256676 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0217 13:24:12.256749 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0217 13:24:12.301508 2295157 cri.go:89] found id: "8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b"
	I0217 13:24:12.301534 2295157 cri.go:89] found id: "6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
	I0217 13:24:12.301539 2295157 cri.go:89] found id: ""
	I0217 13:24:12.301546 2295157 logs.go:282] 2 containers: [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b 6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3]
	I0217 13:24:12.301601 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.305554 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.310251 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0217 13:24:12.310320 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0217 13:24:12.355504 2295157 cri.go:89] found id: "7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790"
	I0217 13:24:12.355527 2295157 cri.go:89] found id: "d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
	I0217 13:24:12.355532 2295157 cri.go:89] found id: ""
	I0217 13:24:12.355539 2295157 logs.go:282] 2 containers: [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790 d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994]
	I0217 13:24:12.355609 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.359467 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.363091 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0217 13:24:12.363200 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0217 13:24:12.414645 2295157 cri.go:89] found id: "4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c"
	I0217 13:24:12.414718 2295157 cri.go:89] found id: "50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
	I0217 13:24:12.414745 2295157 cri.go:89] found id: ""
	I0217 13:24:12.414754 2295157 logs.go:282] 2 containers: [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c 50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c]
	I0217 13:24:12.414854 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.418862 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.422776 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0217 13:24:12.422860 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0217 13:24:12.465716 2295157 cri.go:89] found id: "8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e"
	I0217 13:24:12.465737 2295157 cri.go:89] found id: "b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
	I0217 13:24:12.465741 2295157 cri.go:89] found id: ""
	I0217 13:24:12.465749 2295157 logs.go:282] 2 containers: [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5]
	I0217 13:24:12.465806 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.469723 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.473168 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0217 13:24:12.473244 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0217 13:24:12.512831 2295157 cri.go:89] found id: "153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044"
	I0217 13:24:12.512852 2295157 cri.go:89] found id: "eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
	I0217 13:24:12.512857 2295157 cri.go:89] found id: ""
	I0217 13:24:12.512864 2295157 logs.go:282] 2 containers: [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044 eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9]
	I0217 13:24:12.512925 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.516740 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.520320 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0217 13:24:12.520403 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0217 13:24:12.567350 2295157 cri.go:89] found id: "1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d"
	I0217 13:24:12.567371 2295157 cri.go:89] found id: "bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
	I0217 13:24:12.567376 2295157 cri.go:89] found id: ""
	I0217 13:24:12.567383 2295157 logs.go:282] 2 containers: [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767]
	I0217 13:24:12.567481 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.571188 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.574684 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0217 13:24:12.574757 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0217 13:24:12.613877 2295157 cri.go:89] found id: "9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6"
	I0217 13:24:12.613910 2295157 cri.go:89] found id: "758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f"
	I0217 13:24:12.613916 2295157 cri.go:89] found id: ""
	I0217 13:24:12.613923 2295157 logs.go:282] 2 containers: [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6 758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f]
	I0217 13:24:12.613996 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.617770 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.621355 2295157 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0217 13:24:12.621433 2295157 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0217 13:24:12.662920 2295157 cri.go:89] found id: "21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42"
	I0217 13:24:12.662982 2295157 cri.go:89] found id: ""
	I0217 13:24:12.662995 2295157 logs.go:282] 1 containers: [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42]
	I0217 13:24:12.663071 2295157 ssh_runner.go:195] Run: which crictl
	I0217 13:24:12.666972 2295157 logs.go:123] Gathering logs for storage-provisioner [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6] ...
	I0217 13:24:12.667000 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6"
	I0217 13:24:12.707293 2295157 logs.go:123] Gathering logs for storage-provisioner [758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f] ...
	I0217 13:24:12.707321 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f"
	I0217 13:24:12.745206 2295157 logs.go:123] Gathering logs for kubelet ...
	I0217 13:24:12.745237 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0217 13:24:12.795062 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.309993     661 reflector.go:138] object-"kube-system"/"kindnet-token-vfbnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vfbnq" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:12.795343 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.310267     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-zqt6v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-zqt6v" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:12.795566 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.311034     661 reflector.go:138] object-"default"/"default-token-jrqqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-jrqqq" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:12.795779 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.315771     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:12.796024 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.316033     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-ghwn6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-ghwn6" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:12.796251 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.317925     661 reflector.go:138] object-"kube-system"/"metrics-server-token-bpn96": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bpn96" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:12.796466 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.318273     661 reflector.go:138] object-"kube-system"/"coredns-token-f6dfc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f6dfc" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:12.796670 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:39 old-k8s-version-684625 kubelet[661]: E0217 13:18:39.319367     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-684625" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-684625' and this object
	W0217 13:24:12.806313 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.094609     661 pod_workers.go:191] Error syncing pod db74f299-b905-402c-8142-b6b360bb1ae2 ("kindnet-d7wd6_kube-system(db74f299-b905-402c-8142-b6b360bb1ae2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.808982 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.610279     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:12.810875 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.776479     661 pod_workers.go:191] Error syncing pod db74f299-b905-402c-8142-b6b360bb1ae2 ("kindnet-d7wd6_kube-system(db74f299-b905-402c-8142-b6b360bb1ae2)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.811068 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.798146     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.811995 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.848610     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.812842 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:41 old-k8s-version-684625 kubelet[661]: E0217 13:18:41.918969     661 pod_workers.go:191] Error syncing pod bff5b8f0-6b85-450b-804b-24e5e32c97ba ("busybox_default(bff5b8f0-6b85-450b-804b-24e5e32c97ba)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.815104 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.210034     661 pod_workers.go:191] Error syncing pod ae0c5ea7-e2c9-427f-bdf2-a284c975e898 ("coredns-74ff55c5b-hbrnk_kube-system(ae0c5ea7-e2c9-427f-bdf2-a284c975e898)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.816924 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.797379     661 pod_workers.go:191] Error syncing pod ae0c5ea7-e2c9-427f-bdf2-a284c975e898 ("coredns-74ff55c5b-hbrnk_kube-system(ae0c5ea7-e2c9-427f-bdf2-a284c975e898)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.818246 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.800620     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.819313 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:42 old-k8s-version-684625 kubelet[661]: E0217 13:18:42.803924     661 pod_workers.go:191] Error syncing pod bff5b8f0-6b85-450b-804b-24e5e32c97ba ("busybox_default(bff5b8f0-6b85-450b-804b-24e5e32c97ba)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.820957 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:43 old-k8s-version-684625 kubelet[661]: E0217 13:18:43.302571     661 pod_workers.go:191] Error syncing pod ffc7d812-df4e-4e5f-8523-a049282e4e8a ("kube-proxy-xhtkg_kube-system(ffc7d812-df4e-4e5f-8523-a049282e4e8a)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.822498 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:43 old-k8s-version-684625 kubelet[661]: E0217 13:18:43.837069     661 pod_workers.go:191] Error syncing pod ffc7d812-df4e-4e5f-8523-a049282e4e8a ("kube-proxy-xhtkg_kube-system(ffc7d812-df4e-4e5f-8523-a049282e4e8a)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0217 13:24:12.825682 2295157 logs.go:138] Found kubelet problem: Feb 17 13:18:54 old-k8s-version-684625 kubelet[661]: E0217 13:18:54.524274     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:12.828225 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:05 old-k8s-version-684625 kubelet[661]: E0217 13:19:05.936388     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.828688 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:06 old-k8s-version-684625 kubelet[661]: E0217 13:19:06.947715     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.828875 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:08 old-k8s-version-684625 kubelet[661]: E0217 13:19:08.506836     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.829207 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:15 old-k8s-version-684625 kubelet[661]: E0217 13:19:15.014578     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.832004 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:20 old-k8s-version-684625 kubelet[661]: E0217 13:19:20.517133     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:12.832446 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:25 old-k8s-version-684625 kubelet[661]: E0217 13:19:25.001801     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
	W0217 13:24:12.833038 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:28 old-k8s-version-684625 kubelet[661]: E0217 13:19:28.017166     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.833223 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:31 old-k8s-version-684625 kubelet[661]: E0217 13:19:31.511330     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.833553 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:35 old-k8s-version-684625 kubelet[661]: E0217 13:19:35.014933     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.834136 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:38 old-k8s-version-684625 kubelet[661]: E0217 13:19:38.507288     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
	W0217 13:24:12.834337 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:43 old-k8s-version-684625 kubelet[661]: E0217 13:19:43.507026     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.834936 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:49 old-k8s-version-684625 kubelet[661]: E0217 13:19:49.129541     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.835255 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:52 old-k8s-version-684625 kubelet[661]: E0217 13:19:52.506152     661 pod_workers.go:191] Error syncing pod 5e9ef9f1-9c17-4d40-94db-52c48cce58e3 ("storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e9ef9f1-9c17-4d40-94db-52c48cce58e3)"
	W0217 13:24:12.835587 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:55 old-k8s-version-684625 kubelet[661]: E0217 13:19:55.015087     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.835775 2295157 logs.go:138] Found kubelet problem: Feb 17 13:19:58 old-k8s-version-684625 kubelet[661]: E0217 13:19:58.506545     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.836272 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:07 old-k8s-version-684625 kubelet[661]: E0217 13:20:07.506178     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.838878 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:13 old-k8s-version-684625 kubelet[661]: E0217 13:20:13.517749     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:12.839240 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:20 old-k8s-version-684625 kubelet[661]: E0217 13:20:20.506171     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.839448 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:24 old-k8s-version-684625 kubelet[661]: E0217 13:20:24.512163     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.840578 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:32 old-k8s-version-684625 kubelet[661]: E0217 13:20:32.259438     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.840923 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:35 old-k8s-version-684625 kubelet[661]: E0217 13:20:35.014985     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.841110 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:39 old-k8s-version-684625 kubelet[661]: E0217 13:20:39.507286     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.841447 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:50 old-k8s-version-684625 kubelet[661]: E0217 13:20:50.506239     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.841636 2295157 logs.go:138] Found kubelet problem: Feb 17 13:20:51 old-k8s-version-684625 kubelet[661]: E0217 13:20:51.506540     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.841977 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:01 old-k8s-version-684625 kubelet[661]: E0217 13:21:01.506830     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.842173 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:06 old-k8s-version-684625 kubelet[661]: E0217 13:21:06.508486     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.842503 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:14 old-k8s-version-684625 kubelet[661]: E0217 13:21:14.506262     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.842697 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:20 old-k8s-version-684625 kubelet[661]: E0217 13:21:20.506827     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.843052 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:29 old-k8s-version-684625 kubelet[661]: E0217 13:21:29.507037     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.843241 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:31 old-k8s-version-684625 kubelet[661]: E0217 13:21:31.506506     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.843574 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:40 old-k8s-version-684625 kubelet[661]: E0217 13:21:40.506099     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.846096 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:46 old-k8s-version-684625 kubelet[661]: E0217 13:21:46.514608     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0217 13:24:12.846703 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:53 old-k8s-version-684625 kubelet[661]: E0217 13:21:53.486312     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.847034 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:55 old-k8s-version-684625 kubelet[661]: E0217 13:21:55.014989     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.847224 2295157 logs.go:138] Found kubelet problem: Feb 17 13:21:57 old-k8s-version-684625 kubelet[661]: E0217 13:21:57.513924     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.847555 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:05 old-k8s-version-684625 kubelet[661]: E0217 13:22:05.506670     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.847744 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:10 old-k8s-version-684625 kubelet[661]: E0217 13:22:10.506686     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.848077 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:16 old-k8s-version-684625 kubelet[661]: E0217 13:22:16.506151     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.848261 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:21 old-k8s-version-684625 kubelet[661]: E0217 13:22:21.506684     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.848591 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:27 old-k8s-version-684625 kubelet[661]: E0217 13:22:27.511459     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.848779 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:33 old-k8s-version-684625 kubelet[661]: E0217 13:22:33.506664     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.849108 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:40 old-k8s-version-684625 kubelet[661]: E0217 13:22:40.506352     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.849294 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:48 old-k8s-version-684625 kubelet[661]: E0217 13:22:48.506643     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.849623 2295157 logs.go:138] Found kubelet problem: Feb 17 13:22:52 old-k8s-version-684625 kubelet[661]: E0217 13:22:52.506276     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.849820 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:02 old-k8s-version-684625 kubelet[661]: E0217 13:23:02.506740     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.850159 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:03 old-k8s-version-684625 kubelet[661]: E0217 13:23:03.506425     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.850490 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511407     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.850675 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511756     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.850860 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:27 old-k8s-version-684625 kubelet[661]: E0217 13:23:27.506707     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.851189 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: E0217 13:23:30.506203     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.851375 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:39 old-k8s-version-684625 kubelet[661]: E0217 13:23:39.506666     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.851749 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.851935 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.852266 2295157 logs.go:138] Found kubelet problem: Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:12.852452 2295157 logs.go:138] Found kubelet problem: Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.506785     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:12.852781 2295157 logs.go:138] Found kubelet problem: Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.507625     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	I0217 13:24:12.852792 2295157 logs.go:123] Gathering logs for dmesg ...
	I0217 13:24:12.852807 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0217 13:24:12.871313 2295157 logs.go:123] Gathering logs for describe nodes ...
	I0217 13:24:12.871339 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0217 13:24:13.013482 2295157 logs.go:123] Gathering logs for kube-proxy [b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5] ...
	I0217 13:24:13.013515 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5"
	I0217 13:24:13.069243 2295157 logs.go:123] Gathering logs for kube-controller-manager [eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9] ...
	I0217 13:24:13.069277 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9"
	I0217 13:24:13.142515 2295157 logs.go:123] Gathering logs for etcd [6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3] ...
	I0217 13:24:13.142551 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3"
	I0217 13:24:13.188107 2295157 logs.go:123] Gathering logs for coredns [d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994] ...
	I0217 13:24:13.188137 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994"
	I0217 13:24:13.244205 2295157 logs.go:123] Gathering logs for kube-scheduler [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c] ...
	I0217 13:24:13.244234 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c"
	I0217 13:24:13.284869 2295157 logs.go:123] Gathering logs for containerd ...
	I0217 13:24:13.284996 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0217 13:24:13.345298 2295157 logs.go:123] Gathering logs for container status ...
	I0217 13:24:13.345337 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0217 13:24:13.399215 2295157 logs.go:123] Gathering logs for kube-scheduler [50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c] ...
	I0217 13:24:13.399249 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c"
	I0217 13:24:13.450736 2295157 logs.go:123] Gathering logs for kube-proxy [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e] ...
	I0217 13:24:13.450767 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e"
	I0217 13:24:13.495231 2295157 logs.go:123] Gathering logs for kindnet [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d] ...
	I0217 13:24:13.495258 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d"
	I0217 13:24:13.547422 2295157 logs.go:123] Gathering logs for kindnet [bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767] ...
	I0217 13:24:13.547458 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767"
	I0217 13:24:13.590826 2295157 logs.go:123] Gathering logs for kubernetes-dashboard [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42] ...
	I0217 13:24:13.590857 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42"
	I0217 13:24:13.632177 2295157 logs.go:123] Gathering logs for kube-apiserver [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d] ...
	I0217 13:24:13.632207 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d"
	I0217 13:24:13.704939 2295157 logs.go:123] Gathering logs for kube-apiserver [b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9] ...
	I0217 13:24:13.704976 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9"
	I0217 13:24:13.778637 2295157 logs.go:123] Gathering logs for etcd [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b] ...
	I0217 13:24:13.778676 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b"
	I0217 13:24:13.831400 2295157 logs.go:123] Gathering logs for coredns [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790] ...
	I0217 13:24:13.831433 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790"
	I0217 13:24:13.874217 2295157 logs.go:123] Gathering logs for kube-controller-manager [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044] ...
	I0217 13:24:13.874245 2295157 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044"
	I0217 13:24:13.943790 2295157 out.go:358] Setting ErrFile to fd 2...
	I0217 13:24:13.943825 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0217 13:24:13.943907 2295157 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0217 13:24:13.943922 2295157 out.go:270]   Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	  Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:13.943954 2295157 out.go:270]   Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:13.943983 2295157 out.go:270]   Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	  Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	W0217 13:24:13.943990 2295157 out.go:270]   Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.506785     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.506785     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0217 13:24:13.944001 2295157 out.go:270]   Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.507625     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	  Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.507625     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	I0217 13:24:13.944009 2295157 out.go:358] Setting ErrFile to fd 2...
	I0217 13:24:13.944021 2295157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:24:23.946621 2295157 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0217 13:24:23.960217 2295157 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0217 13:24:23.967095 2295157 out.go:201] 
	W0217 13:24:23.971061 2295157 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0217 13:24:23.971104 2295157 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0217 13:24:23.971123 2295157 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0217 13:24:23.971131 2295157 out.go:270] * 
	* 
	W0217 13:24:23.972751 2295157 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0217 13:24:23.977978 2295157 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-684625 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-684625
helpers_test.go:235: (dbg) docker inspect old-k8s-version-684625:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8",
	        "Created": "2025-02-17T13:15:16.832992873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2295362,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-17T13:18:09.108632625Z",
	            "FinishedAt": "2025-02-17T13:18:07.986084533Z"
	        },
	        "Image": "sha256:86f383d95829214691bb905fe90945d8bf2efbbe5a717e0830a616744d143ec9",
	        "ResolvConfPath": "/var/lib/docker/containers/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8/hosts",
	        "LogPath": "/var/lib/docker/containers/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8/78c38b595a8d46e40e20cddeddc89f1c4a55713f92526adf8aea24cfc29916e8-json.log",
	        "Name": "/old-k8s-version-684625",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-684625:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-684625",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/656f40af279bee00d01959a503f12e2832686984a4b4f4d0d850a259ec44683e-init/diff:/var/lib/docker/overlay2/5eaadba9a34de38da1deed5c4698d3c65d1f3362c3f4e979e5616b492b5ac54b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/656f40af279bee00d01959a503f12e2832686984a4b4f4d0d850a259ec44683e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/656f40af279bee00d01959a503f12e2832686984a4b4f4d0d850a259ec44683e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/656f40af279bee00d01959a503f12e2832686984a4b4f4d0d850a259ec44683e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-684625",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-684625/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-684625",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-684625",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-684625",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da4d36c193fe8f255abf5417e46afec793ff35154af12a474fab28ec4aea3e21",
	            "SandboxKey": "/var/run/docker/netns/da4d36c193fe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50067"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50068"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50071"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50069"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50070"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-684625": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c93d241abfa61e025ce640ad53ad79167f795177b575abdef5a18aa9a5aefda6",
	                    "EndpointID": "6e1c25bd9e3ac46a1378fe8b47dabda3688f60aaead749e475ee1fe13e216cd4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-684625",
	                        "78c38b595a8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-684625 -n old-k8s-version-684625
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-684625 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-684625 logs -n 25: (3.69144496s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-717393                              | cert-expiration-717393   | jenkins | v1.35.0 | 17 Feb 25 13:14 UTC | 17 Feb 25 13:14 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-461736                               | force-systemd-env-461736 | jenkins | v1.35.0 | 17 Feb 25 13:14 UTC | 17 Feb 25 13:14 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-461736                            | force-systemd-env-461736 | jenkins | v1.35.0 | 17 Feb 25 13:14 UTC | 17 Feb 25 13:14 UTC |
	| start   | -p cert-options-592751                                 | cert-options-592751      | jenkins | v1.35.0 | 17 Feb 25 13:14 UTC | 17 Feb 25 13:15 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-592751 ssh                                | cert-options-592751      | jenkins | v1.35.0 | 17 Feb 25 13:15 UTC | 17 Feb 25 13:15 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-592751 -- sudo                         | cert-options-592751      | jenkins | v1.35.0 | 17 Feb 25 13:15 UTC | 17 Feb 25 13:15 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-592751                                 | cert-options-592751      | jenkins | v1.35.0 | 17 Feb 25 13:15 UTC | 17 Feb 25 13:15 UTC |
	| start   | -p old-k8s-version-684625                              | old-k8s-version-684625   | jenkins | v1.35.0 | 17 Feb 25 13:15 UTC | 17 Feb 25 13:17 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-717393                              | cert-expiration-717393   | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:17 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-717393                              | cert-expiration-717393   | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:17 UTC |
	| addons  | enable metrics-server -p old-k8s-version-684625        | old-k8s-version-684625   | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| start   | -p no-preload-695080                                   | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:19 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-684625                              | old-k8s-version-684625   | jenkins | v1.35.0 | 17 Feb 25 13:17 UTC | 17 Feb 25 13:18 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-684625             | old-k8s-version-684625   | jenkins | v1.35.0 | 17 Feb 25 13:18 UTC | 17 Feb 25 13:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-684625                              | old-k8s-version-684625   | jenkins | v1.35.0 | 17 Feb 25 13:18 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-695080             | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:19 UTC | 17 Feb 25 13:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-695080                                   | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:19 UTC | 17 Feb 25 13:19 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-695080                  | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:19 UTC | 17 Feb 25 13:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-695080                                   | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:19 UTC | 17 Feb 25 13:24 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	| image   | no-preload-695080 image list                           | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-695080                                   | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-695080                                   | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-695080                                   | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
	| delete  | -p no-preload-695080                                   | no-preload-695080        | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC | 17 Feb 25 13:24 UTC |
	| start   | -p embed-certs-652383                                  | embed-certs-652383       | jenkins | v1.35.0 | 17 Feb 25 13:24 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 13:24:20
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 13:24:20.836692 2306840 out.go:345] Setting OutFile to fd 1 ...
	I0217 13:24:20.836860 2306840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:24:20.836883 2306840 out.go:358] Setting ErrFile to fd 2...
	I0217 13:24:20.836905 2306840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:24:20.837155 2306840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 13:24:20.837629 2306840 out.go:352] Setting JSON to false
	I0217 13:24:20.838993 2306840 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":309824,"bootTime":1739488837,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0217 13:24:20.840826 2306840 start.go:139] virtualization:  
	I0217 13:24:20.845551 2306840 out.go:177] * [embed-certs-652383] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0217 13:24:20.849134 2306840 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 13:24:20.849173 2306840 notify.go:220] Checking for updates...
	I0217 13:24:20.855838 2306840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 13:24:20.859153 2306840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	I0217 13:24:20.862304 2306840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	I0217 13:24:20.865376 2306840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0217 13:24:20.868510 2306840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 13:24:20.872689 2306840 config.go:182] Loaded profile config "old-k8s-version-684625": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0217 13:24:20.872814 2306840 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 13:24:20.903691 2306840 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 13:24:20.903823 2306840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 13:24:20.970881 2306840 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-17 13:24:20.960875209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 13:24:20.970992 2306840 docker.go:318] overlay module found
	I0217 13:24:20.974323 2306840 out.go:177] * Using the docker driver based on user configuration
	I0217 13:24:20.977327 2306840 start.go:297] selected driver: docker
	I0217 13:24:20.977344 2306840 start.go:901] validating driver "docker" against <nil>
	I0217 13:24:20.977359 2306840 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 13:24:20.978201 2306840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 13:24:21.033855 2306840 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-17 13:24:21.024188248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 13:24:21.034063 2306840 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0217 13:24:21.034302 2306840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0217 13:24:21.037324 2306840 out.go:177] * Using Docker driver with root privileges
	I0217 13:24:21.040287 2306840 cni.go:84] Creating CNI manager for ""
	I0217 13:24:21.040356 2306840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0217 13:24:21.040369 2306840 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0217 13:24:21.040455 2306840 start.go:340] cluster config:
	{Name:embed-certs-652383 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-652383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 13:24:21.045401 2306840 out.go:177] * Starting "embed-certs-652383" primary control-plane node in "embed-certs-652383" cluster
	I0217 13:24:21.048294 2306840 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0217 13:24:21.056462 2306840 out.go:177] * Pulling base image v0.0.46-1739182054-20387 ...
	I0217 13:24:21.059573 2306840 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0217 13:24:21.059635 2306840 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0217 13:24:21.059650 2306840 cache.go:56] Caching tarball of preloaded images
	I0217 13:24:21.059730 2306840 preload.go:172] Found /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0217 13:24:21.059745 2306840 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0217 13:24:21.059854 2306840 profile.go:143] Saving config to /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/embed-certs-652383/config.json ...
	I0217 13:24:21.059880 2306840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/embed-certs-652383/config.json: {Name:mk61c1932965c859c44b5216cb9678a521748b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0217 13:24:21.059975 2306840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
	I0217 13:24:21.080739 2306840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon, skipping pull
	I0217 13:24:21.080765 2306840 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad exists in daemon, skipping load
	I0217 13:24:21.080784 2306840 cache.go:230] Successfully downloaded all kic artifacts
	I0217 13:24:21.080818 2306840 start.go:360] acquireMachinesLock for embed-certs-652383: {Name:mkcc625f379313cf6c4b4962258434670251f4da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0217 13:24:21.080940 2306840 start.go:364] duration metric: took 101.134µs to acquireMachinesLock for "embed-certs-652383"
	I0217 13:24:21.080977 2306840 start.go:93] Provisioning new machine with config: &{Name:embed-certs-652383 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-652383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0217 13:24:21.081047 2306840 start.go:125] createHost starting for "" (driver="docker")
	I0217 13:24:23.946621 2295157 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0217 13:24:23.960217 2295157 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0217 13:24:23.967095 2295157 out.go:201] 
	W0217 13:24:23.971061 2295157 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0217 13:24:23.971104 2295157 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0217 13:24:23.971123 2295157 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0217 13:24:23.971131 2295157 out.go:270] * 
	W0217 13:24:23.972751 2295157 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0217 13:24:23.977978 2295157 out.go:201] 
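	The exit path above is minikube's K8S_UNHEALTHY_CONTROL_PLANE failure: the API server answers /healthz, but the control plane never reports v1.20.0 within the 6m0s wait, so the second start aborts with exit status 102. A minimal, unverified sketch of the recovery the log itself suggests, reusing the profile name and the start flags recorded in the audit table above:
	
	  # capture logs for a GitHub issue, as the failure box asks
	  out/minikube-linux-arm64 -p old-k8s-version-684625 logs --file=logs.txt
	  # drop all profile state, per the suggestion in the failure message
	  out/minikube-linux-arm64 delete --all --purge
	  # re-run the same start that failed (flags copied from the audit table)
	  out/minikube-linux-arm64 start -p old-k8s-version-684625 --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	    --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0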
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a4c0e20b96ef0       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   03bb8f6997fb2       dashboard-metrics-scraper-8d5bb5db8-6p4sg
	9743cccc1e113       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   d1fc1eee4cea5       storage-provisioner
	21d12e92bdc34       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   cd7daf648a193       kubernetes-dashboard-cd95d586-bpfhq
	8d57d7ac631a1       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   0b380ce568da1       kube-proxy-xhtkg
	025f5094ecf9c       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   0f5e23afd2053       busybox
	758a5a1373a2d       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   d1fc1eee4cea5       storage-provisioner
	1bfdc8d63afe5       ee75e27fff91c       5 minutes ago       Running             kindnet-cni                 1                   4863a79d92778       kindnet-d7wd6
	7aa43c123ca5c       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   81b520af2936c       coredns-74ff55c5b-hbrnk
	1d1af565585c6       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   09215a26df46e       kube-apiserver-old-k8s-version-684625
	153a58e15e3c4       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   b0c21468fff4b       kube-controller-manager-old-k8s-version-684625
	4f05943415698       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   da7b90963f8c9       kube-scheduler-old-k8s-version-684625
	8aa69534f9958       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   78a6d684f6c7b       etcd-old-k8s-version-684625
	656af4b962d57       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   87ae35fa80299       busybox
	d2fbdfba3ef99       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   cdfe1b3f40470       coredns-74ff55c5b-hbrnk
	bab8f4d6f0ee4       ee75e27fff91c       8 minutes ago       Exited              kindnet-cni                 0                   1d954be6de071       kindnet-d7wd6
	b1f911e5c971d       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   17c05ca7096a9       kube-proxy-xhtkg
	eb52e41d1f229       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   88ed6c99dd4a8       kube-controller-manager-old-k8s-version-684625
	b6ca4124b9d04       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   a79e4e6db6658       kube-apiserver-old-k8s-version-684625
	6fb5b4bd5f9ac       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   ebf7d4d1dc213       etcd-old-k8s-version-684625
	50badd161aa11       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   095024c3d08f3       kube-scheduler-old-k8s-version-684625
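	The table above is the view from inside the node's container runtime; a hedged way to reproduce it against this profile (crictl ships in the minikube node image, though the exact flags here are an assumption, not taken from this report):
	
	  # list all containers, including exited ones, on the profile's node
	  out/minikube-linux-arm64 -p old-k8s-version-684625 ssh -- sudo crictl ps -a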
	
	
	==> containerd <==
	Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.509044711Z" level=info msg="CreateContainer within sandbox \"03bb8f6997fb2ddf3aa11304e7d38cc6a7de732702d817967ee8226f4f56252c\" for container name:\"dashboard-metrics-scraper\" attempt:4"
	Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.530365532Z" level=info msg="CreateContainer within sandbox \"03bb8f6997fb2ddf3aa11304e7d38cc6a7de732702d817967ee8226f4f56252c\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\""
	Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.531018707Z" level=info msg="StartContainer for \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\""
	Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.601311302Z" level=info msg="StartContainer for \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\" returns successfully"
	Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.601474547Z" level=info msg="received exit event container_id:\"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\" id:\"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\" pid:3028 exit_status:255 exited_at:{seconds:1739798431 nanos:601173403}"
	Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.629002412Z" level=info msg="shim disconnected" id=3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462 namespace=k8s.io
	Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.629064523Z" level=warning msg="cleaning up after shim disconnected" id=3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462 namespace=k8s.io
	Feb 17 13:20:31 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:31.629074163Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Feb 17 13:20:32 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:32.260860223Z" level=info msg="RemoveContainer for \"fee65f05320f0d9e7201f62b22ef81f5b9f93d140110f1e972572efa4b0ad5d1\""
	Feb 17 13:20:32 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:20:32.272975973Z" level=info msg="RemoveContainer for \"fee65f05320f0d9e7201f62b22ef81f5b9f93d140110f1e972572efa4b0ad5d1\" returns successfully"
	Feb 17 13:21:46 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:46.507061731Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 17 13:21:46 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:46.512048717Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Feb 17 13:21:46 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:46.514139498Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Feb 17 13:21:46 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:46.514177503Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.508371695Z" level=info msg="CreateContainer within sandbox \"03bb8f6997fb2ddf3aa11304e7d38cc6a7de732702d817967ee8226f4f56252c\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.528769095Z" level=info msg="CreateContainer within sandbox \"03bb8f6997fb2ddf3aa11304e7d38cc6a7de732702d817967ee8226f4f56252c\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\""
	Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.529924591Z" level=info msg="StartContainer for \"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\""
	Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.592552264Z" level=info msg="StartContainer for \"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\" returns successfully"
	Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.595616613Z" level=info msg="received exit event container_id:\"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\" id:\"a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe\" pid:3260 exit_status:255 exited_at:{seconds:1739798512 nanos:594860441}"
	Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.620687638Z" level=info msg="shim disconnected" id=a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe namespace=k8s.io
	Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.620778088Z" level=warning msg="cleaning up after shim disconnected" id=a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe namespace=k8s.io
	Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.620791676Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Feb 17 13:21:52 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:52.634178885Z" level=warning msg="cleanup warnings time=\"2025-02-17T13:21:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Feb 17 13:21:53 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:53.490081848Z" level=info msg="RemoveContainer for \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\""
	Feb 17 13:21:53 old-k8s-version-684625 containerd[567]: time="2025-02-17T13:21:53.496790558Z" level=info msg="RemoveContainer for \"3815654f56999b8891fa3ac4ebe0a12c6630900cb1fb95e31ff4c5dc61e0a462\" returns successfully"
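	The repeated fake.domain pull failures above are expected for this test: the metrics-server addon was deliberately enabled with its registry pointed at a host that cannot resolve, so the image can never be pulled. A sketch of that setup (the enable command is copied from the audit table; the follow-up check and the k8s-app=metrics-server label are assumptions about how one might confirm the resulting ImagePullBackOff):
	
	  # enable metrics-server with the image redirected to an unreachable registry
	  out/minikube-linux-arm64 -p old-k8s-version-684625 addons enable metrics-server \
	    --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	  # the pod should then sit in ImagePullBackOff (label assumed, not taken from this report)
	  out/minikube-linux-arm64 -p old-k8s-version-684625 kubectl -- get pods -n kube-system -l k8s-app=metrics-server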
	
	
	==> coredns [7aa43c123ca5c8ee16024ce390f643f3333b13fc862bf96225319c34bd675790] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:50081 - 62598 "HINFO IN 7539283703582014816.3843172251095661444. udp 57 false 512" NOERROR - 0 6.000718692s
	[ERROR] plugin/errors: 2 7539283703582014816.3843172251095661444. HINFO: read udp 10.244.0.4:45310->192.168.85.1:53: i/o timeout
	[INFO] 127.0.0.1:33984 - 33744 "HINFO IN 7539283703582014816.3843172251095661444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 4.00474647s
	[INFO] 127.0.0.1:47217 - 34059 "HINFO IN 7539283703582014816.3843172251095661444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.003237407s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:49819 - 61104 "HINFO IN 7539283703582014816.3843172251095661444. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013633894s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0217 13:19:16.501878       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-17 13:18:46.501319316 +0000 UTC m=+0.093551390) (total time: 30.000460413s):
	Trace[939984059]: [30.000460413s] [30.000460413s] END
	E0217 13:19:16.501904       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0217 13:19:16.502095       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-17 13:18:46.501835485 +0000 UTC m=+0.094067560) (total time: 30.000248472s):
	Trace[1474941318]: [30.000248472s] [30.000248472s] END
	E0217 13:19:16.502101       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0217 13:19:16.502163       1 trace.go:116] Trace[140954425]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-17 13:18:46.495778299 +0000 UTC m=+0.088010374) (total time: 30.006376015s):
	Trace[140954425]: [30.006376015s] [30.006376015s] END
	E0217 13:19:16.502168       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [d2fbdfba3ef99543c61ad6cef772fc3a5b7a646c8260a21633878f8e85b54994] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:44187 - 16515 "HINFO IN 160704174796526169.2174922180872160481. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.03576722s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-684625
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-684625
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d5460083481c20438a5263486cb626e4191c2126
	                    minikube.k8s.io/name=old-k8s-version-684625
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_17T13_15_56_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Feb 2025 13:15:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-684625
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Feb 2025 13:24:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Feb 2025 13:19:32 +0000   Mon, 17 Feb 2025 13:15:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Feb 2025 13:19:32 +0000   Mon, 17 Feb 2025 13:15:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Feb 2025 13:19:32 +0000   Mon, 17 Feb 2025 13:15:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Feb 2025 13:19:32 +0000   Mon, 17 Feb 2025 13:16:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-684625
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 631c480c8f0f434f8d5713e5c84e7653
	  System UUID:                3bb971ce-bb5d-4937-b0c4-fc32579828e1
	  Boot ID:                    f9f324bd-030b-4f03-bce8-fdc4ef2922d9
	  Kernel Version:             5.15.0-1077-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 coredns-74ff55c5b-hbrnk                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m14s
	  kube-system                 etcd-old-k8s-version-684625                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m22s
	  kube-system                 kindnet-d7wd6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m14s
	  kube-system                 kube-apiserver-old-k8s-version-684625             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-controller-manager-old-k8s-version-684625    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-proxy-xhtkg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-scheduler-old-k8s-version-684625             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 metrics-server-9975d5f86-bj72q                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-6p4sg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-bpfhq               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m42s (x4 over 8m42s)  kubelet     Node old-k8s-version-684625 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m42s (x5 over 8m42s)  kubelet     Node old-k8s-version-684625 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m42s (x4 over 8m42s)  kubelet     Node old-k8s-version-684625 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m23s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m23s                  kubelet     Node old-k8s-version-684625 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s                  kubelet     Node old-k8s-version-684625 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s                  kubelet     Node old-k8s-version-684625 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m22s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m14s                  kubelet     Node old-k8s-version-684625 status is now: NodeReady
	  Normal  Starting                 8m13s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m1s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)    kubelet     Node old-k8s-version-684625 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)    kubelet     Node old-k8s-version-684625 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)    kubelet     Node old-k8s-version-684625 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m29s                  kube-proxy  Starting kube-proxy.
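	The node summary and events above correspond to kubectl's describe view of the single control-plane node; a sketch of how to pull the same view with the bundled kubectl, assuming the profile is still running:
	
	  out/minikube-linux-arm64 -p old-k8s-version-684625 kubectl -- describe node old-k8s-version-684625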
	
	
	==> dmesg <==
	
	
	==> etcd [6fb5b4bd5f9ac7a040dcad6928caa1b3967e2dd681c09a9423985a1fb46f7dd3] <==
	raft2025/02/17 13:15:45 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2025/02/17 13:15:45 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2025/02/17 13:15:45 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2025/02/17 13:15:45 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2025/02/17 13:15:45 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2025-02-17 13:15:45.834562 I | etcdserver: setting up the initial cluster version to 3.4
	2025-02-17 13:15:45.834887 I | etcdserver: published {Name:old-k8s-version-684625 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2025-02-17 13:15:45.835128 I | embed: ready to serve client requests
	2025-02-17 13:15:45.836651 I | embed: serving client requests on 192.168.85.2:2379
	2025-02-17 13:15:45.842502 I | embed: ready to serve client requests
	2025-02-17 13:15:45.846842 I | embed: serving client requests on 127.0.0.1:2379
	2025-02-17 13:15:45.847656 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-02-17 13:15:45.916322 I | etcdserver/api: enabled capabilities for version 3.4
	2025-02-17 13:16:08.500676 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:16:09.604266 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:16:19.604200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:16:29.604163 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:16:39.604185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:16:49.604338 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:16:59.604334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:17:09.604282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:17:19.604193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:17:29.604138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:17:39.604381 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:17:49.604390 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [8aa69534f9958225d2f2b3307d50f0441f9d86a346225ab80b37c88dd5e3f36b] <==
	2025-02-17 13:20:17.920818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:20:27.920712 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:20:37.920812 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:20:47.920712 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:20:57.920760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:21:07.920675 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:21:17.920738 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:21:27.920684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:21:37.920752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:21:47.920670 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:21:57.920953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:22:07.920796 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:22:17.920711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:22:27.920657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:22:37.920790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:22:47.920822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:22:57.920807 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:23:07.920660 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:23:17.920832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:23:27.920824 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:23:37.920741 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:23:47.920935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:23:57.920880 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:24:07.921055 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-17 13:24:17.920894 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 13:24:26 up 3 days, 14:03,  0 users,  load average: 0.49, 1.73, 2.39
	Linux old-k8s-version-684625 5.15.0-1077-aws #84~20.04.1-Ubuntu SMP Mon Jan 20 22:14:27 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1bfdc8d63afe5fa71712c71c5c1aacceed3dafda653b9d1752367504f061fc6d] <==
	I0217 13:22:24.167710       1 main.go:301] handling current node
	I0217 13:22:34.163631       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:22:34.163666       1 main.go:301] handling current node
	I0217 13:22:44.167487       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:22:44.167519       1 main.go:301] handling current node
	I0217 13:22:54.159595       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:22:54.159647       1 main.go:301] handling current node
	I0217 13:23:04.165756       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:23:04.165792       1 main.go:301] handling current node
	I0217 13:23:14.165746       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:23:14.165780       1 main.go:301] handling current node
	I0217 13:23:24.165751       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:23:24.165788       1 main.go:301] handling current node
	I0217 13:23:34.158780       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:23:34.158816       1 main.go:301] handling current node
	I0217 13:23:44.166637       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:23:44.166678       1 main.go:301] handling current node
	I0217 13:23:54.159243       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:23:54.159282       1 main.go:301] handling current node
	I0217 13:24:04.165763       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:24:04.165798       1 main.go:301] handling current node
	I0217 13:24:14.167020       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:24:14.167246       1 main.go:301] handling current node
	I0217 13:24:24.165735       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:24:24.165768       1 main.go:301] handling current node
	
	
	==> kindnet [bab8f4d6f0ee4f9a1abcfde790eef766d71739b3cc47f67c74f614cc1af1f767] <==
	I0217 13:16:15.733890       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0217 13:16:16.050721       1 controller.go:361] Starting controller kube-network-policies
	I0217 13:16:16.050751       1 controller.go:365] Waiting for informer caches to sync
	I0217 13:16:16.050757       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0217 13:16:16.251050       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0217 13:16:16.251079       1 metrics.go:61] Registering metrics
	I0217 13:16:16.251315       1 controller.go:401] Syncing nftables rules
	I0217 13:16:26.050544       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:16:26.050735       1 main.go:301] handling current node
	I0217 13:16:36.050547       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:16:36.050584       1 main.go:301] handling current node
	I0217 13:16:46.053945       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:16:46.054002       1 main.go:301] handling current node
	I0217 13:16:56.057783       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:16:56.057819       1 main.go:301] handling current node
	I0217 13:17:06.059014       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:17:06.059136       1 main.go:301] handling current node
	I0217 13:17:16.050815       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:17:16.050852       1 main.go:301] handling current node
	I0217 13:17:26.053751       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:17:26.053784       1 main.go:301] handling current node
	I0217 13:17:36.053783       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:17:36.053829       1 main.go:301] handling current node
	I0217 13:17:46.052556       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0217 13:17:46.052595       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d1af565585c63854b5c243e7af906936cc9eeb60c615bf0689d126f80c7d61d] <==
	I0217 13:21:07.214981       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0217 13:21:07.215012       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0217 13:21:42.368003       1 handler_proxy.go:102] no RequestInfo found in the context
	E0217 13:21:42.368104       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0217 13:21:42.368121       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0217 13:21:45.048783       1 client.go:360] parsed scheme: "passthrough"
	I0217 13:21:45.048863       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0217 13:21:45.048875       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0217 13:22:27.289605       1 client.go:360] parsed scheme: "passthrough"
	I0217 13:22:27.289647       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0217 13:22:27.289656       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0217 13:23:01.402539       1 client.go:360] parsed scheme: "passthrough"
	I0217 13:23:01.402782       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0217 13:23:01.402880       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0217 13:23:39.025108       1 client.go:360] parsed scheme: "passthrough"
	I0217 13:23:39.025158       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0217 13:23:39.025168       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0217 13:23:40.204093       1 handler_proxy.go:102] no RequestInfo found in the context
	E0217 13:23:40.204169       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0217 13:23:40.204310       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0217 13:24:15.580647       1 client.go:360] parsed scheme: "passthrough"
	I0217 13:24:15.580701       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0217 13:24:15.580722       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [b6ca4124b9d0433924cd320e9bc5c6b1f345031f9b6bb0c9c7c97ae40afbcce9] <==
	I0217 13:15:53.153522       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0217 13:15:53.153856       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0217 13:15:53.322201       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0217 13:15:53.334426       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0217 13:15:53.334903       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0217 13:15:53.690522       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0217 13:15:53.742112       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0217 13:15:53.886403       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0217 13:15:53.887533       1 controller.go:606] quota admission added evaluator for: endpoints
	I0217 13:15:53.891183       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0217 13:15:54.876300       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0217 13:15:55.415858       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0217 13:15:55.477012       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0217 13:16:03.909112       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0217 13:16:12.340576       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0217 13:16:12.506256       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0217 13:16:17.097467       1 client.go:360] parsed scheme: "passthrough"
	I0217 13:16:17.097511       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0217 13:16:17.097519       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0217 13:16:51.794759       1 client.go:360] parsed scheme: "passthrough"
	I0217 13:16:51.794842       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0217 13:16:51.794890       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0217 13:17:28.253902       1 client.go:360] parsed scheme: "passthrough"
	I0217 13:17:28.253945       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0217 13:17:28.253954       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [153a58e15e3c4dc66a3d5fc3bf3ef0318439dfc65cc72009789764d486ba1044] <==
	W0217 13:20:03.725231       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0217 13:20:29.770210       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0217 13:20:35.375650       1 request.go:655] Throttling request took 1.047751514s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0217 13:20:36.227112       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0217 13:21:00.272132       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0217 13:21:07.877718       1 request.go:655] Throttling request took 1.047359447s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0217 13:21:08.729497       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0217 13:21:30.774476       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0217 13:21:40.380042       1 request.go:655] Throttling request took 1.048372425s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0217 13:21:41.231456       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0217 13:22:01.276901       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0217 13:22:12.882031       1 request.go:655] Throttling request took 1.048449937s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0217 13:22:13.733392       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0217 13:22:31.778662       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0217 13:22:45.383969       1 request.go:655] Throttling request took 1.048065922s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0217 13:22:46.235405       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0217 13:23:02.280537       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0217 13:23:17.885764       1 request.go:655] Throttling request took 1.047753683s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1?timeout=32s
	W0217 13:23:18.739822       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0217 13:23:32.782562       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0217 13:23:50.390320       1 request.go:655] Throttling request took 1.047969586s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0217 13:23:51.241835       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0217 13:24:03.284334       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0217 13:24:22.892284       1 request.go:655] Throttling request took 1.046970975s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0217 13:24:23.743857       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [eb52e41d1f2297c683254369e047c39a6a479279c66d29b50be1fb4f255a9ed9] <==
	I0217 13:16:12.546084       1 shared_informer.go:247] Caches are synced for resource quota 
	I0217 13:16:12.549472       1 shared_informer.go:247] Caches are synced for attach detach 
	I0217 13:16:12.555975       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0217 13:16:12.574515       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0217 13:16:12.590384       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0217 13:16:12.590513       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0217 13:16:12.590921       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0217 13:16:12.590950       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0217 13:16:12.590970       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0217 13:16:12.590992       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0217 13:16:12.591042       1 shared_informer.go:247] Caches are synced for resource quota 
	I0217 13:16:12.634197       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-g6vph"
	I0217 13:16:12.717892       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-hbrnk"
	I0217 13:16:12.721806       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0217 13:16:12.776250       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dacb3042-be21-40a8-bf08-b00f12f5856b", ResourceVersion:"281", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63875394956, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20250214-acbabc1a\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001b92c60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001b92c80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b92ca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b92cc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b92ce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b92d00), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20250214-acbabc1a", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b92d20)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b92d60)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001b7f3e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d1b608), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a36af0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000347e30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d1b660)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0217 13:16:12.971326       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0217 13:16:12.971350       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0217 13:16:13.021927       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0217 13:16:14.035377       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0217 13:16:14.078471       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-g6vph"
	I0217 13:16:17.328336       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0217 13:17:54.144915       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0217 13:17:54.198623       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0217 13:17:54.244456       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0217 13:17:54.352740       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [8d57d7ac631a1acf36b914c8d19940b69c073bef88c6905c15b4965fab02d15e] <==
	I0217 13:18:57.745280       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0217 13:18:57.745611       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0217 13:18:57.764994       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0217 13:18:57.765284       1 server_others.go:185] Using iptables Proxier.
	I0217 13:18:57.765864       1 server.go:650] Version: v1.20.0
	I0217 13:18:57.766565       1 config.go:224] Starting endpoint slice config controller
	I0217 13:18:57.766685       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0217 13:18:57.766986       1 config.go:315] Starting service config controller
	I0217 13:18:57.767085       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0217 13:18:57.866922       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0217 13:18:57.867235       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [b1f911e5c971da34f6431f138860ea47ba7df67785c9a20b9352a1c8e33823d5] <==
	I0217 13:16:13.416476       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0217 13:16:13.416571       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0217 13:16:13.531298       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0217 13:16:13.531446       1 server_others.go:185] Using iptables Proxier.
	I0217 13:16:13.531896       1 server.go:650] Version: v1.20.0
	I0217 13:16:13.532536       1 config.go:315] Starting service config controller
	I0217 13:16:13.532544       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0217 13:16:13.532561       1 config.go:224] Starting endpoint slice config controller
	I0217 13:16:13.532564       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0217 13:16:13.632643       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0217 13:16:13.632716       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [4f0594341569838b4d7a9066ad968b46c9a938399c2c51f0521563d7af65df7c] <==
	I0217 13:18:33.501477       1 serving.go:331] Generated self-signed cert in-memory
	I0217 13:18:40.893895       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0217 13:18:40.894800       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0217 13:18:40.895440       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0217 13:18:40.895543       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0217 13:18:40.896019       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0217 13:18:40.896116       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0217 13:18:40.894279       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0217 13:18:40.894303       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0217 13:18:40.995112       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I0217 13:18:40.996199       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0217 13:18:40.996359       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	
	
	==> kube-scheduler [50badd161aa11e46b27fbde357ffcfee26108453cbd1a48c4202fa69c832d12c] <==
	I0217 13:15:48.089376       1 serving.go:331] Generated self-signed cert in-memory
	W0217 13:15:52.390378       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0217 13:15:52.390796       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0217 13:15:52.390950       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0217 13:15:52.391079       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0217 13:15:52.449343       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0217 13:15:52.449654       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0217 13:15:52.451289       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0217 13:15:52.465379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0217 13:15:52.465489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0217 13:15:52.465559       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0217 13:15:52.465624       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0217 13:15:52.473566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0217 13:15:52.473615       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0217 13:15:52.474789       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0217 13:15:52.479377       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0217 13:15:52.488374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0217 13:15:52.488671       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0217 13:15:52.489171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0217 13:15:52.489451       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0217 13:15:52.489566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0217 13:15:53.474304       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0217 13:15:53.479782       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0217 13:15:54.051459       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Feb 17 13:22:40 old-k8s-version-684625 kubelet[661]: E0217 13:22:40.506352     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	Feb 17 13:22:48 old-k8s-version-684625 kubelet[661]: E0217 13:22:48.506643     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 17 13:22:52 old-k8s-version-684625 kubelet[661]: I0217 13:22:52.505923     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
	Feb 17 13:22:52 old-k8s-version-684625 kubelet[661]: E0217 13:22:52.506276     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	Feb 17 13:23:02 old-k8s-version-684625 kubelet[661]: E0217 13:23:02.506740     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 17 13:23:03 old-k8s-version-684625 kubelet[661]: I0217 13:23:03.505874     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
	Feb 17 13:23:03 old-k8s-version-684625 kubelet[661]: E0217 13:23:03.506425     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: I0217 13:23:15.510665     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
	Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511407     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	Feb 17 13:23:15 old-k8s-version-684625 kubelet[661]: E0217 13:23:15.511756     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 17 13:23:27 old-k8s-version-684625 kubelet[661]: E0217 13:23:27.506707     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: I0217 13:23:30.505830     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
	Feb 17 13:23:30 old-k8s-version-684625 kubelet[661]: E0217 13:23:30.506203     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	Feb 17 13:23:39 old-k8s-version-684625 kubelet[661]: E0217 13:23:39.506666     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: I0217 13:23:44.505888     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
	Feb 17 13:23:44 old-k8s-version-684625 kubelet[661]: E0217 13:23:44.506275     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	Feb 17 13:23:54 old-k8s-version-684625 kubelet[661]: E0217 13:23:54.506806     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: I0217 13:23:56.505740     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
	Feb 17 13:23:56 old-k8s-version-684625 kubelet[661]: E0217 13:23:56.506539     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.506785     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: I0217 13:24:07.507285     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
	Feb 17 13:24:07 old-k8s-version-684625 kubelet[661]: E0217 13:24:07.507625     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	Feb 17 13:24:20 old-k8s-version-684625 kubelet[661]: I0217 13:24:20.505921     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: a4c0e20b96ef0908ec0e312d1f339ce400f201fd3f4dc50064f3d638a85788fe
	Feb 17 13:24:20 old-k8s-version-684625 kubelet[661]: E0217 13:24:20.507003     661 pod_workers.go:191] Error syncing pod d3e7918c-9931-44bb-bd2c-17b4a717ba53 ("dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p4sg_kubernetes-dashboard(d3e7918c-9931-44bb-bd2c-17b4a717ba53)"
	Feb 17 13:24:21 old-k8s-version-684625 kubelet[661]: E0217 13:24:21.506691     661 pod_workers.go:191] Error syncing pod 1ae4944b-aed9-4676-b04f-b07146544af0 ("metrics-server-9975d5f86-bj72q_kube-system(1ae4944b-aed9-4676-b04f-b07146544af0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [21d12e92bdc34f4eb089a594d382622cdd7bdce444dde0266c8b4fdd1e0ecd42] <==
	2025/02/17 13:19:07 Using namespace: kubernetes-dashboard
	2025/02/17 13:19:07 Using in-cluster config to connect to apiserver
	2025/02/17 13:19:07 Using secret token for csrf signing
	2025/02/17 13:19:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/17 13:19:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/17 13:19:07 Successful initial request to the apiserver, version: v1.20.0
	2025/02/17 13:19:07 Generating JWE encryption key
	2025/02/17 13:19:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/17 13:19:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/17 13:19:08 Initializing JWE encryption key from synchronized object
	2025/02/17 13:19:08 Creating in-cluster Sidecar client
	2025/02/17 13:19:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:19:08 Serving insecurely on HTTP port: 9090
	2025/02/17 13:19:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:20:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:20:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:21:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:21:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:22:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:22:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:23:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:23:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:24:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/17 13:19:07 Starting overwatch
	
	
	==> storage-provisioner [758a5a1373a2d24baaddbf9318059fa25c272bf1df9cce967ae2f43c79f87c4f] <==
	I0217 13:18:54.603472       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0217 13:19:24.605517       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9743cccc1e1132185b91405b4c36a8b1e644bbc3103aee415b84291d7c8ff5a6] <==
	I0217 13:20:05.612637       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0217 13:20:05.627722       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0217 13:20:05.627903       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0217 13:20:23.154282       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0217 13:20:23.154727       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-684625_55565854-9ca0-4b19-8f32-bf3332fe1135!
	I0217 13:20:23.157726       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c35428c6-272d-42a7-b9d2-a4f0095100b5", APIVersion:"v1", ResourceVersion:"911", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-684625_55565854-9ca0-4b19-8f32-bf3332fe1135 became leader
	I0217 13:20:23.257071       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-684625_55565854-9ca0-4b19-8f32-bf3332fe1135!
	

                                                
                                                
-- /stdout --
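
The recurring signal in the logs above is the metrics API: kube-apiserver repeatedly reports 503s for v1beta1.metrics.k8s.io, and kubelet shows metrics-server-9975d5f86-bj72q stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4. A minimal sketch for confirming that state by hand, assuming kubectl can still reach the old-k8s-version-684625 context and that the addon pod carries the usual k8s-app=metrics-server label (both are assumptions, not taken from this report):

	# Check whether the aggregated metrics API is registered and available
	kubectl --context old-k8s-version-684625 get apiservice v1beta1.metrics.k8s.io
	# Inspect the metrics-server pod and its image-pull events (label selector is an assumption)
	kubectl --context old-k8s-version-684625 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context old-k8s-version-684625 -n kube-system describe pod -l k8s-app=metrics-server

If the pod still exists, the describe output would be expected to show the same "Back-off pulling image" events seen in the kubelet log above.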
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-684625 -n old-k8s-version-684625
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-684625 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-bj72q
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-684625 describe pod metrics-server-9975d5f86-bj72q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-684625 describe pod metrics-server-9975d5f86-bj72q: exit status 1 (108.346314ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-bj72q" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-684625 describe pod metrics-server-9975d5f86-bj72q: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.62s)
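
To iterate on this failure outside CI, a hypothetical sketch, assuming a minikube source checkout (start_stop_delete_test.go and helpers_test.go live under test/integration) and an already-built out/minikube-linux-arm64 from the same commit; the harness-specific flags the real job passes (driver, container runtime, timeouts) are omitted here and may change the behaviour:

	# Run the whole old-k8s-version serial group; SecondStart depends on the
	# earlier serial steps (FirstStart, DeployApp, Stop) to have created and
	# stopped the profile, so running it in isolation is not expected to work.
	go test ./test/integration -v -timeout 60m \
	  -run 'TestStartStop/group/old-k8s-version'

Only the standard go test flags (-run, -v, -timeout) are shown; any additional test-binary flags used by this job are an assumption left out of the sketch.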

                                                
                                    

Test pass (300/331)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.33
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.1/json-events 4.68
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.1
18 TestDownloadOnly/v1.32.1/DeleteAll 0.22
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 154.08
29 TestAddons/serial/Volcano 40.11
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.12
35 TestAddons/parallel/Registry 15.87
36 TestAddons/parallel/Ingress 19.35
37 TestAddons/parallel/InspektorGadget 11.15
38 TestAddons/parallel/MetricsServer 5.84
40 TestAddons/parallel/CSI 53.33
41 TestAddons/parallel/Headlamp 17.13
42 TestAddons/parallel/CloudSpanner 5.61
43 TestAddons/parallel/LocalPath 52.01
44 TestAddons/parallel/NvidiaDevicePlugin 6.54
45 TestAddons/parallel/Yakd 11.83
47 TestAddons/StoppedEnableDisable 12.24
48 TestCertOptions 38.1
49 TestCertExpiration 230.22
51 TestForceSystemdFlag 51.51
52 TestForceSystemdEnv 40.45
53 TestDockerEnvContainerd 46.59
58 TestErrorSpam/setup 30.69
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.2
61 TestErrorSpam/pause 1.9
62 TestErrorSpam/unpause 1.97
63 TestErrorSpam/stop 1.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 48.73
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.8
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
75 TestFunctional/serial/CacheCmd/cache/add_local 1.35
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.08
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 45.92
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.78
86 TestFunctional/serial/LogsFileCmd 1.82
87 TestFunctional/serial/InvalidService 5.02
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 13.87
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.31
97 TestFunctional/parallel/ServiceCmdConnect 11.67
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 25.89
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.54
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 2.26
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
113 TestFunctional/parallel/License 0.29
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.47
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
127 TestFunctional/parallel/ServiceCmd/List 0.6
128 TestFunctional/parallel/ProfileCmd/profile_list 0.56
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
132 TestFunctional/parallel/MountCmd/any-port 7.42
133 TestFunctional/parallel/ServiceCmd/Format 0.46
134 TestFunctional/parallel/ServiceCmd/URL 0.45
135 TestFunctional/parallel/MountCmd/specific-port 1.79
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.23
137 TestFunctional/parallel/Version/short 0.1
138 TestFunctional/parallel/Version/components 1.42
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.88
144 TestFunctional/parallel/ImageCommands/Setup 0.82
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.26
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 116.41
163 TestMultiControlPlane/serial/DeployApp 34.08
164 TestMultiControlPlane/serial/PingHostFromPods 1.73
165 TestMultiControlPlane/serial/AddWorkerNode 23.08
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
168 TestMultiControlPlane/serial/CopyFile 19.47
169 TestMultiControlPlane/serial/StopSecondaryNode 12.82
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
171 TestMultiControlPlane/serial/RestartSecondaryNode 18.87
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 134.68
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.69
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
176 TestMultiControlPlane/serial/StopCluster 35.91
177 TestMultiControlPlane/serial/RestartCluster 63.92
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
179 TestMultiControlPlane/serial/AddSecondaryNode 42.76
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1
184 TestJSONOutput/start/Command 52.03
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.75
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.65
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.78
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 41.17
210 TestKicCustomNetwork/use_default_bridge_network 33.46
211 TestKicExistingNetwork 33.78
212 TestKicCustomSubnet 34.64
213 TestKicStaticIP 37.96
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 67.79
218 TestMountStart/serial/StartWithMountFirst 9.5
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 8.87
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.65
223 TestMountStart/serial/VerifyMountPostDelete 0.25
224 TestMountStart/serial/Stop 1.21
225 TestMountStart/serial/RestartStopped 7.39
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 77.08
230 TestMultiNode/serial/DeployApp2Nodes 15.3
231 TestMultiNode/serial/PingHostFrom2Pods 1.04
232 TestMultiNode/serial/AddNode 15.53
233 TestMultiNode/serial/MultiNodeLabels 0.11
234 TestMultiNode/serial/ProfileList 0.71
235 TestMultiNode/serial/CopyFile 10.17
236 TestMultiNode/serial/StopNode 2.27
237 TestMultiNode/serial/StartAfterStop 10.98
238 TestMultiNode/serial/RestartKeepsNodes 87.99
239 TestMultiNode/serial/DeleteNode 5.35
240 TestMultiNode/serial/StopMultiNode 23.92
241 TestMultiNode/serial/RestartMultiNode 63.71
242 TestMultiNode/serial/ValidateNameConflict 32.54
247 TestPreload 119.33
249 TestScheduledStopUnix 106.92
252 TestInsufficientStorage 12.99
253 TestRunningBinaryUpgrade 88.34
255 TestKubernetesUpgrade 351.97
256 TestMissingContainerUpgrade 182.2
258 TestPause/serial/Start 65.1
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
261 TestNoKubernetes/serial/StartWithK8s 42.52
262 TestNoKubernetes/serial/StartWithStopK8s 17.51
263 TestNoKubernetes/serial/Start 5.71
264 TestPause/serial/SecondStartNoReconfiguration 7.45
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
266 TestNoKubernetes/serial/ProfileList 1.31
267 TestNoKubernetes/serial/Stop 1.31
268 TestNoKubernetes/serial/StartNoArgs 7.33
269 TestPause/serial/Pause 0.86
270 TestPause/serial/VerifyStatus 0.39
271 TestPause/serial/Unpause 0.84
272 TestPause/serial/PauseAgain 1
273 TestPause/serial/DeletePaused 2.86
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
275 TestPause/serial/VerifyDeletedResources 0.17
276 TestStoppedBinaryUpgrade/Setup 0.59
277 TestStoppedBinaryUpgrade/Upgrade 98.92
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
293 TestNetworkPlugins/group/false 5.14
298 TestStartStop/group/old-k8s-version/serial/FirstStart 153.56
299 TestStartStop/group/old-k8s-version/serial/DeployApp 10.64
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.67
302 TestStartStop/group/no-preload/serial/FirstStart 68.51
303 TestStartStop/group/old-k8s-version/serial/Stop 13.54
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
306 TestStartStop/group/no-preload/serial/DeployApp 9.4
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
308 TestStartStop/group/no-preload/serial/Stop 12.07
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
310 TestStartStop/group/no-preload/serial/SecondStart 276.42
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
314 TestStartStop/group/no-preload/serial/Pause 3.08
316 TestStartStop/group/embed-certs/serial/FirstStart 71.25
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
320 TestStartStop/group/old-k8s-version/serial/Pause 3.91
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.35
323 TestStartStop/group/embed-certs/serial/DeployApp 10.38
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
325 TestStartStop/group/embed-certs/serial/Stop 12.31
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.34
329 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
330 TestStartStop/group/embed-certs/serial/SecondStart 290.57
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.34
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 270.97
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
337 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.17
338 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
340 TestStartStop/group/newest-cni/serial/FirstStart 43.64
341 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
342 TestStartStop/group/embed-certs/serial/Pause 4.25
343 TestNetworkPlugins/group/auto/Start 73.97
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.9
346 TestStartStop/group/newest-cni/serial/Stop 1.37
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.31
348 TestStartStop/group/newest-cni/serial/SecondStart 17.89
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
352 TestStartStop/group/newest-cni/serial/Pause 3.05
353 TestNetworkPlugins/group/kindnet/Start 67.83
354 TestNetworkPlugins/group/auto/KubeletFlags 0.41
355 TestNetworkPlugins/group/auto/NetCatPod 10.37
356 TestNetworkPlugins/group/auto/DNS 0.2
357 TestNetworkPlugins/group/auto/Localhost 0.2
358 TestNetworkPlugins/group/auto/HairPin 0.19
359 TestNetworkPlugins/group/calico/Start 68.59
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
362 TestNetworkPlugins/group/kindnet/NetCatPod 10.42
363 TestNetworkPlugins/group/kindnet/DNS 0.23
364 TestNetworkPlugins/group/kindnet/Localhost 0.21
365 TestNetworkPlugins/group/kindnet/HairPin 0.22
366 TestNetworkPlugins/group/custom-flannel/Start 53.94
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.37
369 TestNetworkPlugins/group/calico/NetCatPod 10.33
370 TestNetworkPlugins/group/calico/DNS 0.27
371 TestNetworkPlugins/group/calico/Localhost 0.24
372 TestNetworkPlugins/group/calico/HairPin 0.24
373 TestNetworkPlugins/group/enable-default-cni/Start 76.16
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.54
376 TestNetworkPlugins/group/custom-flannel/DNS 0.27
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
379 TestNetworkPlugins/group/flannel/Start 51.98
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.47
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
387 TestNetworkPlugins/group/flannel/NetCatPod 11.34
388 TestNetworkPlugins/group/bridge/Start 46.02
389 TestNetworkPlugins/group/flannel/DNS 0.22
390 TestNetworkPlugins/group/flannel/Localhost 0.2
391 TestNetworkPlugins/group/flannel/HairPin 0.21
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
393 TestNetworkPlugins/group/bridge/NetCatPod 9.27
394 TestNetworkPlugins/group/bridge/DNS 0.16
395 TestNetworkPlugins/group/bridge/Localhost 0.15
396 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (6.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-300413 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-300413 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.33304779s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.33s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0217 12:31:29.748201 2085373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0217 12:31:29.748293 2085373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
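
For reference, the cache that preload-exists asserts on can be inspected by hand. A minimal sketch, assuming the MINIKUBE_HOME used in this run (adjust the path for a local environment):

    # Illustrative manual check: list the preloaded tarball the test found above.
    MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4"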

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-300413
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-300413: exit status 85 (96.328709ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-300413 | jenkins | v1.35.0 | 17 Feb 25 12:31 UTC |          |
	|         | -p download-only-300413        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 12:31:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 12:31:23.459585 2085379 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:31:23.459710 2085379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:31:23.459721 2085379 out.go:358] Setting ErrFile to fd 2...
	I0217 12:31:23.459728 2085379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:31:23.460053 2085379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	W0217 12:31:23.460224 2085379 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20427-2080001/.minikube/config/config.json: open /home/jenkins/minikube-integration/20427-2080001/.minikube/config/config.json: no such file or directory
	I0217 12:31:23.460668 2085379 out.go:352] Setting JSON to true
	I0217 12:31:23.461540 2085379 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":306647,"bootTime":1739488837,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0217 12:31:23.461643 2085379 start.go:139] virtualization:  
	I0217 12:31:23.465967 2085379 out.go:97] [download-only-300413] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0217 12:31:23.466153 2085379 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball: no such file or directory
	I0217 12:31:23.466208 2085379 notify.go:220] Checking for updates...
	I0217 12:31:23.469117 2085379 out.go:169] MINIKUBE_LOCATION=20427
	I0217 12:31:23.472011 2085379 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 12:31:23.474868 2085379 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	I0217 12:31:23.477616 2085379 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	I0217 12:31:23.480515 2085379 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0217 12:31:23.486231 2085379 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0217 12:31:23.486502 2085379 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 12:31:23.511300 2085379 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 12:31:23.511406 2085379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:31:23.565749 2085379 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 12:31:23.556467667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:31:23.565857 2085379 docker.go:318] overlay module found
	I0217 12:31:23.568803 2085379 out.go:97] Using the docker driver based on user configuration
	I0217 12:31:23.568828 2085379 start.go:297] selected driver: docker
	I0217 12:31:23.568836 2085379 start.go:901] validating driver "docker" against <nil>
	I0217 12:31:23.568949 2085379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:31:23.618969 2085379 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 12:31:23.610403331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:31:23.619170 2085379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0217 12:31:23.619453 2085379 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0217 12:31:23.619601 2085379 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0217 12:31:23.622813 2085379 out.go:169] Using Docker driver with root privileges
	I0217 12:31:23.625674 2085379 cni.go:84] Creating CNI manager for ""
	I0217 12:31:23.625738 2085379 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0217 12:31:23.625751 2085379 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0217 12:31:23.625826 2085379 start.go:340] cluster config:
	{Name:download-only-300413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-300413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 12:31:23.628797 2085379 out.go:97] Starting "download-only-300413" primary control-plane node in "download-only-300413" cluster
	I0217 12:31:23.628821 2085379 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0217 12:31:23.631648 2085379 out.go:97] Pulling base image v0.0.46-1739182054-20387 ...
	I0217 12:31:23.631678 2085379 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0217 12:31:23.631847 2085379 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local docker daemon
	I0217 12:31:23.647485 2085379 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad to local cache
	I0217 12:31:23.647696 2085379 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad in local cache directory
	I0217 12:31:23.647802 2085379 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad to local cache
	I0217 12:31:23.687582 2085379 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0217 12:31:23.687620 2085379 cache.go:56] Caching tarball of preloaded images
	I0217 12:31:23.688358 2085379 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0217 12:31:23.691642 2085379 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0217 12:31:23.691666 2085379 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0217 12:31:23.773417 2085379 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0217 12:31:27.279297 2085379 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0217 12:31:27.279490 2085379 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-300413 host does not exist
	  To start a cluster, run: "minikube start -p download-only-300413"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-300413
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (4.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-481409 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-481409 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.6802544s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (4.68s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0217 12:31:34.919062 2085373 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0217 12:31:34.919103 2085373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20427-2080001/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-481409
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-481409: exit status 85 (98.192579ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-300413 | jenkins | v1.35.0 | 17 Feb 25 12:31 UTC |                     |
	|         | -p download-only-300413        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 17 Feb 25 12:31 UTC | 17 Feb 25 12:31 UTC |
	| delete  | -p download-only-300413        | download-only-300413 | jenkins | v1.35.0 | 17 Feb 25 12:31 UTC | 17 Feb 25 12:31 UTC |
	| start   | -o=json --download-only        | download-only-481409 | jenkins | v1.35.0 | 17 Feb 25 12:31 UTC |                     |
	|         | -p download-only-481409        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/17 12:31:30
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0217 12:31:30.288767 2085577 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:31:30.288963 2085577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:31:30.288991 2085577 out.go:358] Setting ErrFile to fd 2...
	I0217 12:31:30.289012 2085577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:31:30.289273 2085577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 12:31:30.289749 2085577 out.go:352] Setting JSON to true
	I0217 12:31:30.290681 2085577 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":306654,"bootTime":1739488837,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0217 12:31:30.290785 2085577 start.go:139] virtualization:  
	I0217 12:31:30.294491 2085577 out.go:97] [download-only-481409] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0217 12:31:30.294786 2085577 notify.go:220] Checking for updates...
	I0217 12:31:30.297848 2085577 out.go:169] MINIKUBE_LOCATION=20427
	I0217 12:31:30.300959 2085577 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 12:31:30.303918 2085577 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	I0217 12:31:30.306967 2085577 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	I0217 12:31:30.309803 2085577 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0217 12:31:30.315331 2085577 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0217 12:31:30.315578 2085577 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 12:31:30.339272 2085577 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 12:31:30.339378 2085577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:31:30.403697 2085577 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-17 12:31:30.394859963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:31:30.403810 2085577 docker.go:318] overlay module found
	I0217 12:31:30.406845 2085577 out.go:97] Using the docker driver based on user configuration
	I0217 12:31:30.406904 2085577 start.go:297] selected driver: docker
	I0217 12:31:30.406916 2085577 start.go:901] validating driver "docker" against <nil>
	I0217 12:31:30.407020 2085577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:31:30.458923 2085577 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-17 12:31:30.450540386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:31:30.459131 2085577 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0217 12:31:30.459416 2085577 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0217 12:31:30.459565 2085577 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0217 12:31:30.462765 2085577 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-481409 host does not exist
	  To start a cluster, run: "minikube start -p download-only-481409"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-481409
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0217 12:31:36.253202 2085373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-293329 --alsologtostderr --binary-mirror http://127.0.0.1:45313 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-293329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-293329
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-767669
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-767669: exit status 85 (79.808578ms)

                                                
                                                
-- stdout --
	* Profile "addons-767669" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-767669"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-767669
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-767669: exit status 85 (76.854054ms)

                                                
                                                
-- stdout --
	* Profile "addons-767669" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-767669"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (154.08s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-767669 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-767669 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m34.08300639s)
--- PASS: TestAddons/Setup (154.08s)
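
The Setup invocation above enables every addon under test in a single start. A minimal sketch of the same start with a reduced addon set (all flags copied from the run above; the profile name is reused only for illustration):

    # Illustrative subset of the Setup flags; further --addons flags can be appended as above.
    out/minikube-linux-arm64 start -p addons-767669 --wait=true --memory=4000 \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=ingress
    # Individual addons can also be toggled after start, as the later subtests do:
    out/minikube-linux-arm64 -p addons-767669 addons disable volcano --alsologtostderr -v=1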

                                                
                                    
TestAddons/serial/Volcano (40.11s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 61.145198ms
addons_test.go:807: volcano-scheduler stabilized in 62.052531ms
addons_test.go:815: volcano-admission stabilized in 62.808466ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-k2czz" [2fcadc59-5ded-4f3e-944f-4953c508974d] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004016577s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-grghz" [206dcfc4-b911-488b-ad4d-ccb4df239e25] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003276194s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-qtkl9" [6d77d389-dd6e-4343-97e5-8ea73f8f5af5] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003134924s
addons_test.go:842: (dbg) Run:  kubectl --context addons-767669 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-767669 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-767669 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [01ac8fbc-0f9b-4be7-ba73-b83f5b76b429] Pending
helpers_test.go:344: "test-job-nginx-0" [01ac8fbc-0f9b-4be7-ba73-b83f5b76b429] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [01ac8fbc-0f9b-4be7-ba73-b83f5b76b429] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003623086s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-767669 addons disable volcano --alsologtostderr -v=1: (11.460617439s)
--- PASS: TestAddons/serial/Volcano (40.11s)
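
The readiness checks above poll pods by label through the test helpers. A rough kubectl equivalent, assuming the same context, namespace, and labels shown in the log:

    # Wait for the three volcano components the test watches (labels taken from the log above).
    kubectl --context addons-767669 -n volcano-system wait --for=condition=Ready pod -l app=volcano-scheduler --timeout=6m
    kubectl --context addons-767669 -n volcano-system wait --for=condition=Ready pod -l app=volcano-admission --timeout=6m
    kubectl --context addons-767669 -n volcano-system wait --for=condition=Ready pod -l app=volcano-controller --timeout=6m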

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-767669 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-767669 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-767669 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-767669 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [745647a3-8879-4969-bb6a-019e40a3745e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [745647a3-8879-4969-bb6a-019e40a3745e] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004013182s
addons_test.go:633: (dbg) Run:  kubectl --context addons-767669 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-767669 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-767669 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-767669 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.12s)
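
Collected here for manual verification, the in-pod checks the test ran above (commands verbatim from the log; they show the credentials file and environment the gcp-auth addon provides to the busybox pod):

    kubectl --context addons-767669 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-767669 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-767669 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"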

                                                
                                    
TestAddons/parallel/Registry (15.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.303287ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-m8xq7" [ac89fd7e-fdf7-4093-a4cd-14ac27c554f3] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003314993s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l2mgn" [d53f76ce-42ee-4fa7-8c9a-72009e646fa3] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003507686s
addons_test.go:331: (dbg) Run:  kubectl --context addons-767669 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-767669 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-767669 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.833986279s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 ip
2025/02/17 12:35:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.87s)
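
The registry checks above hit the service from inside the cluster and then the node endpoint from the host. A minimal sketch of the same two probes (commands and IP taken from the log; the curl form is an illustrative stand-in for the test's HTTP GET):

    # In-cluster DNS/HTTP check from a throwaway busybox pod, as the test does:
    kubectl --context addons-767669 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-side check against the endpoint the test fetched (192.168.49.2 is this run's node IP):
    curl -sI http://192.168.49.2:5000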

                                                
                                    
TestAddons/parallel/Ingress (19.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-767669 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-767669 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-767669 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1f7b823a-5e48-4579-be66-798dbbebbdfb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1f7b823a-5e48-4579-be66-798dbbebbdfb] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.005691861s
I0217 12:36:45.154208 2085373 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-767669 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-767669 addons disable ingress-dns --alsologtostderr -v=1: (1.546216266s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-767669 addons disable ingress --alsologtostderr -v=1: (7.787916079s)
--- PASS: TestAddons/parallel/Ingress (19.35s)
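
The two assertions above exercise the ingress controller and ingress-dns. The same checks can be run by hand; host name and IP are taken from the log:

    # HTTP through the ingress controller, selecting the Ingress by Host header:
    out/minikube-linux-arm64 -p addons-767669 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Name resolution through ingress-dns, queried against this run's node IP:
    nslookup hello-john.test 192.168.49.2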

                                                
                                    
TestAddons/parallel/InspektorGadget (11.15s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-c52vg" [ffe42608-2661-45f8-95fc-2112e0211fe1] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003462077s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-767669 addons disable inspektor-gadget --alsologtostderr -v=1: (6.147201432s)
--- PASS: TestAddons/parallel/InspektorGadget (11.15s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.84s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.167368ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-7b844" [9fc07ef4-b4ce-4dcf-a2df-c1f3b9880652] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00405962s
addons_test.go:402: (dbg) Run:  kubectl --context addons-767669 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)
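A minimal manual check of the same addon, with <profile> as a placeholder: enable metrics-server, wait for its deployment, and confirm that kubectl top returns pod metrics.

  minikube -p <profile> addons enable metrics-server
  kubectl --context <profile> -n kube-system rollout status deployment/metrics-server
  # metrics need a scrape interval or two before `top` has data to show
  kubectl --context <profile> top pods -n kube-system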

                                                
                                    
TestAddons/parallel/CSI (53.33s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0217 12:35:48.688706 2085373 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0217 12:35:48.692219 2085373 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0217 12:35:48.692245 2085373 kapi.go:107] duration metric: took 6.382924ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.39254ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-767669 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-767669 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [652db167-c628-40d3-af4b-4ff5147f2a30] Pending
helpers_test.go:344: "task-pv-pod" [652db167-c628-40d3-af4b-4ff5147f2a30] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [652db167-c628-40d3-af4b-4ff5147f2a30] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003476262s
addons_test.go:511: (dbg) Run:  kubectl --context addons-767669 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-767669 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-767669 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-767669 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-767669 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-767669 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-767669 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2268bf54-952b-46e1-9b1c-be739c2f3bca] Pending
helpers_test.go:344: "task-pv-pod-restore" [2268bf54-952b-46e1-9b1c-be739c2f3bca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2268bf54-952b-46e1-9b1c-be739c2f3bca] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00308301s
addons_test.go:553: (dbg) Run:  kubectl --context addons-767669 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-767669 delete pod task-pv-pod-restore: (1.138356279s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-767669 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-767669 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-767669 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.904054388s)
--- PASS: TestAddons/parallel/CSI (53.33s)
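The sequence above exercises the csi-hostpath-driver end to end: provision a PVC, mount it in a pod, snapshot it, then restore the snapshot into a new PVC and pod. A condensed manual replay, using the same testdata manifests and object names that appear in the log (expected outputs noted as comments):

  kubectl create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl get pvc hpvc -o jsonpath='{.status.phase}'                                # expect Bound
  kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml                        # pod that mounts hpvc
  kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'   # expect true
  kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
  kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml                   # PVC restored from the snapshot
  kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml                # pod that mounts the restored claim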

                                                
                                    
TestAddons/parallel/Headlamp (17.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-767669 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-7swpq" [0b1be073-198f-4cea-afc1-c18ed178913a] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-7swpq" [0b1be073-198f-4cea-afc1-c18ed178913a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-7swpq" [0b1be073-198f-4cea-afc1-c18ed178913a] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003631775s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-767669 addons disable headlamp --alsologtostderr -v=1: (6.138135372s)
--- PASS: TestAddons/parallel/Headlamp (17.13s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-45gvv" [3ffd5a79-6aa6-4649-add0-8c6a61ce9f6d] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003540165s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                    
TestAddons/parallel/LocalPath (52.01s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-767669 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-767669 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-767669 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d6a3504e-3752-483e-9521-fdc35a9e7abe] Pending
helpers_test.go:344: "test-local-path" [d6a3504e-3752-483e-9521-fdc35a9e7abe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d6a3504e-3752-483e-9521-fdc35a9e7abe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003744537s
addons_test.go:906: (dbg) Run:  kubectl --context addons-767669 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 ssh "cat /opt/local-path-provisioner/pvc-aba112f1-f40f-4457-897e-1be0e6539b0b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-767669 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-767669 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-767669 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.797469131s)
--- PASS: TestAddons/parallel/LocalPath (52.01s)
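The local-path check writes a file through a PVC provisioned by the storage-provisioner-rancher addon and reads it back from the node's filesystem. A hedged sketch of the same flow; the directory under /opt/local-path-provisioner is generated per claim, so it is left as a placeholder:

  minikube -p <profile> addons enable storage-provisioner-rancher
  kubectl --context <profile> apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context <profile> apply -f testdata/storage-provisioner-rancher/pod.yaml
  kubectl --context <profile> get pvc test-pvc -o jsonpath='{.status.phase}'   # expect Bound
  # the test pod writes file1 into the claim; read it back from the host path on the node
  minikube -p <profile> ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"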

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lrfs8" [ecbbd061-5360-416f-8edc-baabce7da422] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002906484s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
TestAddons/parallel/Yakd (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-g995w" [6a7a40b4-93e3-4d51-a2f5-0b134c9ba6ca] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002758271s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-767669 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-767669 addons disable yakd --alsologtostderr -v=1: (5.822818714s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-767669
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-767669: (11.954798198s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-767669
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-767669
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-767669
--- PASS: TestAddons/StoppedEnableDisable (12.24s)
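This test only asserts that addon toggling still works while the cluster is stopped; the equivalent by hand (profile name is a placeholder):

  minikube stop -p <profile>
  minikube addons enable dashboard -p <profile>    # should succeed against the stopped cluster
  minikube addons disable dashboard -p <profile>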

                                                
                                    
TestCertOptions (38.1s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-592751 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-592751 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.42663846s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-592751 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-592751 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-592751 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-592751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-592751
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-592751: (2.015752442s)
--- PASS: TestCertOptions (38.10s)
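To confirm the custom apiserver SANs and port by hand, inspect the generated certificate the same way the test does; the grep filters are added here for readability, the rest mirrors the log:

  minikube start -p <profile> --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --container-runtime=containerd
  # the extra IPs/names should appear under X509v3 Subject Alternative Name
  minikube -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 'Subject Alternative Name'
  # the kubeconfig inside the node should point at the non-default port 8555
  minikube ssh -p <profile> -- "sudo cat /etc/kubernetes/admin.conf" | grep server: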

                                                
                                    
TestCertExpiration (230.22s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-717393 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0217 13:14:11.061824 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-717393 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.382628616s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-717393 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-717393 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.329960903s)
helpers_test.go:175: Cleaning up "cert-expiration-717393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-717393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-717393: (2.510635802s)
--- PASS: TestCertExpiration (230.22s)
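The expiration test leans on minikube's --cert-expiration flag: start with a deliberately short validity, wait past it, then start again with a long validity and let the restart rotate the expired certificates. A sketch with a throwaway profile:

  minikube start -p cert-demo --cert-expiration=3m --driver=docker --container-runtime=containerd
  sleep 180   # let the 3-minute certificates lapse
  # restarting with a longer validity should regenerate the certs and bring the cluster back
  minikube start -p cert-demo --cert-expiration=8760h --driver=docker --container-runtime=containerd
  minikube delete -p cert-demo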

                                                
                                    
TestForceSystemdFlag (51.51s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-789189 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-789189 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (48.785651307s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-789189 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-789189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-789189
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-789189: (2.291069425s)
--- PASS: TestForceSystemdFlag (51.51s)
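--force-systemd switches the node's container runtime to the systemd cgroup driver; the test verifies this by dumping containerd's config. A quick manual check along the same lines (SystemdCgroup is containerd's runc option name, stated here as an assumption rather than taken from the log):

  minikube start -p systemd-demo --force-systemd --driver=docker --container-runtime=containerd
  # with systemd cgroups in effect this should print: SystemdCgroup = true
  minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
  minikube delete -p systemd-demo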

                                                
                                    
TestForceSystemdEnv (40.45s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-461736 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-461736 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.68966934s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-461736 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-461736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-461736
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-461736: (2.342621478s)
--- PASS: TestForceSystemdEnv (40.45s)

                                                
                                    
TestDockerEnvContainerd (46.59s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-271516 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-271516 --driver=docker  --container-runtime=containerd: (30.594631461s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-271516"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-271516": (1.004432058s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-OWwTLYxvMFWY/agent.2106051" SSH_AGENT_PID="2106052" DOCKER_HOST=ssh://docker@127.0.0.1:49777 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-OWwTLYxvMFWY/agent.2106051" SSH_AGENT_PID="2106052" DOCKER_HOST=ssh://docker@127.0.0.1:49777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-OWwTLYxvMFWY/agent.2106051" SSH_AGENT_PID="2106052" DOCKER_HOST=ssh://docker@127.0.0.1:49777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.570126112s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-OWwTLYxvMFWY/agent.2106051" SSH_AGENT_PID="2106052" DOCKER_HOST=ssh://docker@127.0.0.1:49777 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-271516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-271516
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-271516: (2.059646527s)
--- PASS: TestDockerEnvContainerd (46.59s)
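docker-env with --ssh-host/--ssh-add points a local docker CLI at the runtime inside the minikube node over SSH, which is how the build above ends up in the cluster's containerd image store. A condensed version of the same workflow; the profile name, image tag, and build context are placeholders:

  minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
  # exports DOCKER_HOST (ssh://docker@...) and loads the node's SSH key into the agent
  eval "$(minikube -p dockerenv-demo docker-env --ssh-host --ssh-add)"
  docker version                                     # now talks to the engine inside the node
  DOCKER_BUILDKIT=0 docker build -t local/demo:latest ./build-context
  docker image ls | grep local/demo                  # the image is visible to the cluster runtime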

                                                
                                    
TestErrorSpam/setup (30.69s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-106240 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-106240 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-106240 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-106240 --driver=docker  --container-runtime=containerd: (30.692172474s)
--- PASS: TestErrorSpam/setup (30.69s)

                                                
                                    
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.2s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 status
--- PASS: TestErrorSpam/status (1.20s)

                                                
                                    
TestErrorSpam/pause (1.9s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 pause
--- PASS: TestErrorSpam/pause (1.90s)

                                                
                                    
TestErrorSpam/unpause (1.97s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 unpause
--- PASS: TestErrorSpam/unpause (1.97s)

                                                
                                    
TestErrorSpam/stop (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 stop: (1.290249253s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-106240 --log_dir /tmp/nospam-106240 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20427-2080001/.minikube/files/etc/test/nested/copy/2085373/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (48.73s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082454 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0217 12:39:11.056978 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:11.063406 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:11.074924 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:11.096394 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:11.137896 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:11.219369 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:11.381433 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:11.702857 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:12.344601 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:13.625946 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:16.187875 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:21.309628 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:39:31.551523 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-082454 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.731881856s)
--- PASS: TestFunctional/serial/StartWithProxy (48.73s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.8s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0217 12:39:32.341414 2085373 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082454 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-082454 --alsologtostderr -v=8: (5.802221584s)
functional_test.go:680: soft start took 5.804100041s for "functional-082454" cluster.
I0217 12:39:38.143963 2085373 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (5.80s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-082454 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-082454 cache add registry.k8s.io/pause:3.1: (1.469725973s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-082454 cache add registry.k8s.io/pause:3.3: (1.40811441s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-082454 cache add registry.k8s.io/pause:latest: (1.256508449s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-082454 /tmp/TestFunctionalserialCacheCmdcacheadd_local2112681343/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 cache add minikube-local-cache-test:functional-082454
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 cache delete minikube-local-cache-test:functional-082454
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-082454
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.067017ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-082454 cache reload: (1.141806703s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)
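cache reload pushes images from minikube's on-host cache back into the node after they have been removed from the runtime, which is exactly the failure/recovery the two inspecti calls above bracket. Replayed by hand (crictl inside the node needs sudo, as in the log):

  minikube -p <profile> cache add registry.k8s.io/pause:latest
  minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
  minikube -p <profile> cache reload                                            # re-loads everything in the cache
  minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again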

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 kubectl -- --context functional-082454 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-082454 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.92s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082454 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0217 12:39:52.032917 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-082454 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.923326457s)
functional_test.go:778: restart took 45.92342366s for "functional-082454" cluster.
I0217 12:40:32.636056 2085373 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (45.92s)
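--extra-config forwards component flags through to kubeadm; here it restarts the cluster with an additional admission plugin on the apiserver. The whole step reduces to one command (profile name is a placeholder):

  # component.key=value is passed to the matching component's configuration
  minikube start -p <profile> \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all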

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-082454 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
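The health check lists the control-plane pods and asserts each is Running and Ready; roughly the same view can be pulled with a single jsonpath query (a sketch, not part of the test):

  kubectl --context <profile> -n kube-system get po -l tier=control-plane \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'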

                                                
                                    
TestFunctional/serial/LogsCmd (1.78s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 logs
E0217 12:40:32.997825 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-082454 logs: (1.778504871s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 logs --file /tmp/TestFunctionalserialLogsFileCmd4106322818/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-082454 logs --file /tmp/TestFunctionalserialLogsFileCmd4106322818/001/logs.txt: (1.815597692s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.82s)

                                                
                                    
TestFunctional/serial/InvalidService (5.02s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-082454 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-082454
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-082454: exit status 115 (764.989802ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31434 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-082454 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.02s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 config get cpus: exit status 14 (85.690232ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 config get cpus: exit status 14 (71.459913ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
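In this run, config get exits with status 14 whenever the requested key is unset, which is what the two non-zero exits above assert. The round trip by hand (profile name is a placeholder):

  minikube -p <profile> config unset cpus
  minikube -p <profile> config get cpus     # non-zero exit: key not present
  minikube -p <profile> config set cpus 2
  minikube -p <profile> config get cpus     # prints 2
  minikube -p <profile> config unset cpus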

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-082454 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-082454 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2121271: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.87s)

                                                
                                    
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082454 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-082454 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (202.520043ms)

                                                
                                                
-- stdout --
	* [functional-082454] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0217 12:41:14.326441 2120963 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:41:14.327048 2120963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:41:14.327085 2120963 out.go:358] Setting ErrFile to fd 2...
	I0217 12:41:14.327106 2120963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:41:14.327402 2120963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 12:41:14.327829 2120963 out.go:352] Setting JSON to false
	I0217 12:41:14.328830 2120963 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":307238,"bootTime":1739488837,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0217 12:41:14.328929 2120963 start.go:139] virtualization:  
	I0217 12:41:14.332497 2120963 out.go:177] * [functional-082454] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0217 12:41:14.336189 2120963 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 12:41:14.336305 2120963 notify.go:220] Checking for updates...
	I0217 12:41:14.343199 2120963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 12:41:14.346155 2120963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	I0217 12:41:14.348954 2120963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	I0217 12:41:14.351890 2120963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0217 12:41:14.354701 2120963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 12:41:14.358299 2120963 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0217 12:41:14.358969 2120963 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 12:41:14.390787 2120963 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 12:41:14.390896 2120963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:41:14.446400 2120963 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 12:41:14.437642504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:41:14.446509 2120963 docker.go:318] overlay module found
	I0217 12:41:14.449550 2120963 out.go:177] * Using the docker driver based on existing profile
	I0217 12:41:14.453394 2120963 start.go:297] selected driver: docker
	I0217 12:41:14.453415 2120963 start.go:901] validating driver "docker" against &{Name:functional-082454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-082454 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 12:41:14.453529 2120963 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 12:41:14.457037 2120963 out.go:201] 
	W0217 12:41:14.459912 2120963 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0217 12:41:14.462734 2120963 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082454 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)
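The dry-run invocation recorded above exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below the 1800MB usable minimum that minikube reports. The following is a minimal, illustrative Go sketch of that kind of pre-flight memory validation; the constant name, function name, and exact wording are assumptions for illustration, not minikube's actual source.

package main

import (
	"fmt"
	"os"
)

// minUsableMemoryMB mirrors the 1800MB floor reported in the log above;
// the constant name is illustrative, not taken from minikube's code.
const minUsableMemoryMB = 1800

// validateRequestedMemory rejects allocations below the usable minimum,
// the same class of check that produced the error in the captured stderr.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the dry-run above exited with status 23
	}
}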
TestFunctional/parallel/InternationalLanguage (0.26s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082454 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-082454 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (260.935364ms)
-- stdout --
	* [functional-082454] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0217 12:41:14.105887 2120860 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:41:14.106102 2120860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:41:14.106116 2120860 out.go:358] Setting ErrFile to fd 2...
	I0217 12:41:14.106125 2120860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:41:14.107306 2120860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 12:41:14.107752 2120860 out.go:352] Setting JSON to false
	I0217 12:41:14.108824 2120860 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":307237,"bootTime":1739488837,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0217 12:41:14.108921 2120860 start.go:139] virtualization:  
	I0217 12:41:14.113538 2120860 out.go:177] * [functional-082454] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0217 12:41:14.117434 2120860 notify.go:220] Checking for updates...
	I0217 12:41:14.120331 2120860 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 12:41:14.123237 2120860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 12:41:14.126114 2120860 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	I0217 12:41:14.129045 2120860 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	I0217 12:41:14.132285 2120860 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0217 12:41:14.135233 2120860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 12:41:14.138665 2120860 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0217 12:41:14.139188 2120860 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 12:41:14.176301 2120860 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 12:41:14.176421 2120860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:41:14.240280 2120860 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 12:41:14.226999054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:41:14.240400 2120860 docker.go:318] overlay module found
	I0217 12:41:14.247389 2120860 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0217 12:41:14.250247 2120860 start.go:297] selected driver: docker
	I0217 12:41:14.250276 2120860 start.go:901] validating driver "docker" against &{Name:functional-082454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-082454 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0217 12:41:14.250405 2120860 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 12:41:14.254138 2120860 out.go:201] 
	W0217 12:41:14.257021 2120860 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0217 12:41:14.259905 2120860 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
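The InternationalLanguage run repeats the same under-provisioned dry-run but checks that the output is localized: "Utilisation du pilote docker basé sur le profil existant" is the French rendering of "Using the docker driver based on existing profile", and the RSRC_INSUFFICIENT_REQ_MEMORY message appears in French as well. A small, illustrative Go sketch of driving such a run with a French locale is shown below; it assumes (without confirming from minikube's source) that translations are selected from the standard LC_ALL/LANG environment variables, and it reuses the binary path and flags seen in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Re-run the dry-run from the log, but with a French locale in the environment.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-082454",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr_FR.UTF-8") // assumed locale mechanism
	out, err := cmd.CombinedOutput()
	// Expect a non-zero exit and a localized RSRC_INSUFFICIENT_REQ_MEMORY message.
	fmt.Printf("exit err: %v\n%s", err, out)
}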
TestFunctional/parallel/StatusCmd (1.31s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)
TestFunctional/parallel/ServiceCmdConnect (11.67s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-082454 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-082454 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-sxs88" [d6c13d1f-97af-45ae-b664-fe61fb2e5a0a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-sxs88" [d6c13d1f-97af-45ae-b664-fe61fb2e5a0a] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.006269963s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:31291
functional_test.go:1692: http://192.168.49.2:31291: success! body:

Hostname: hello-node-connect-8449669db6-sxs88

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31291
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.67s)
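ServiceCmdConnect deploys the echoserver image, exposes it as a NodePort service, resolves the URL with "minikube service hello-node-connect --url", and then fetches that URL until the pod answers. A minimal, illustrative Go sketch of the polling step is below; the URL is the one captured in the log and the timeout/backoff values are assumptions, not the harness's actual settings.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollEndpoint retries a GET against the NodePort URL until the service
// responds with 200 OK or the timeout elapses.
func pollEndpoint(url string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for {
		resp, err := http.Get(url)
		if err == nil {
			body, readErr := io.ReadAll(resp.Body)
			resp.Body.Close()
			if readErr == nil && resp.StatusCode == http.StatusOK {
				return string(body), nil
			}
			lastErr = readErr
		} else {
			lastErr = err
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("endpoint %s not ready after %s (last error: %v)", url, timeout, lastErr)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	body, err := pollEndpoint("http://192.168.49.2:31291", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println(body) // should include the "Hostname:" line shown above
}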
TestFunctional/parallel/AddonsCmd (0.2s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)
TestFunctional/parallel/PersistentVolumeClaim (25.89s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [314dd53f-d9f3-4cf1-b83e-909d1b352045] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003993896s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-082454 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-082454 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-082454 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-082454 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dd866df6-eaab-440c-aea4-9616d9ef8b2d] Pending
helpers_test.go:344: "sp-pod" [dd866df6-eaab-440c-aea4-9616d9ef8b2d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dd866df6-eaab-440c-aea4-9616d9ef8b2d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003633859s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-082454 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-082454 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-082454 delete -f testdata/storage-provisioner/pod.yaml: (1.83121617s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-082454 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b679692c-fd38-4054-a2aa-c59bc6cebe1e] Pending
helpers_test.go:344: "sp-pod" [b679692c-fd38-4054-a2aa-c59bc6cebe1e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003899166s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-082454 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.89s)
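The PersistentVolumeClaim test verifies persistence across pod restarts: it touches /tmp/mount/foo through the PVC-backed volume, deletes and recreates sp-pod, and then lists /tmp/mount to confirm the file survived. The sketch below is a rough, illustrative re-run of that sequence with plain kubectl calls from Go; it assumes the same kube context and testdata manifests referenced in the log, and it omits the wait-for-Running step the real harness performs between recreate and exec.

package main

import (
	"fmt"
	"os/exec"
)

// run executes kubectl against the same context used by the test above.
func run(args ...string) (string, error) {
	full := append([]string{"--context", "functional-082454"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// the real test waits for the new sp-pod to be Running here
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		out, err := run(s...)
		fmt.Printf("kubectl %v -> err=%v\n%s", s, err, out)
	}
}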
TestFunctional/parallel/SSHCmd (0.72s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)
TestFunctional/parallel/CpCmd (2.54s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh -n functional-082454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 cp functional-082454:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1824027962/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh -n functional-082454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh -n functional-082454 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.54s)
TestFunctional/parallel/FileSync (0.33s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/2085373/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo cat /etc/test/nested/copy/2085373/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
TestFunctional/parallel/CertSync (2.26s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/2085373.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo cat /etc/ssl/certs/2085373.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/2085373.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo cat /usr/share/ca-certificates/2085373.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/20853732.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo cat /etc/ssl/certs/20853732.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/20853732.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo cat /usr/share/ca-certificates/20853732.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.26s)
TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-082454 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 ssh "sudo systemctl is-active docker": exit status 1 (315.260797ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 ssh "sudo systemctl is-active crio": exit status 1 (352.407089ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
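NonActiveRuntimeDisabled confirms that the runtimes not selected for this profile (docker and crio, since the cluster runs containerd) are disabled inside the node: "systemctl is-active" prints "inactive" and exits non-zero (status 3 in the capture above), which the test treats as the expected result. A small, illustrative Go sketch of the same check follows; it relies only on the exit status of "systemctl is-active" (zero means active), and the helper name is an assumption, not part of the test code.

package main

import (
	"fmt"
	"os/exec"
)

// runtimeActive reports whether a runtime's systemd unit is active inside the
// minikube node, using the same "minikube ssh ... systemctl is-active" call
// shown in the log. A nil error from Run() means the unit is active.
func runtimeActive(profile, unit string) bool {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", fmt.Sprintf("sudo systemctl is-active %s", unit))
	return cmd.Run() == nil
}

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		fmt.Printf("%s active: %v\n", unit, runtimeActive("functional-082454", unit))
	}
}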
TestFunctional/parallel/License (0.29s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-082454 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-082454 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-082454 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2118288: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-082454 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-082454 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-082454 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ba2e07f2-4bbb-4358-a2cc-e9bab5ec7ce4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ba2e07f2-4bbb-4358-a2cc-e9bab5ec7ce4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00403157s
I0217 12:40:51.718041 2085373 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-082454 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.179.126 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-082454 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-082454 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-082454 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-6h4k8" [974071cb-d05d-47c8-9f8b-fefb598c22b1] Pending
helpers_test.go:344: "hello-node-64fc58db8c-6h4k8" [974071cb-d05d-47c8-9f8b-fefb598c22b1] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005001928s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)
TestFunctional/parallel/ServiceCmd/List (0.6s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)
TestFunctional/parallel/ProfileCmd/profile_list (0.56s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "469.879659ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "90.75524ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 service list -o json
functional_test.go:1511: Took "616.20455ms" to run "out/minikube-linux-arm64 -p functional-082454 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "446.795474ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "106.107375ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31948
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)
TestFunctional/parallel/MountCmd/any-port (7.42s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdany-port966780516/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739796071417985417" to /tmp/TestFunctionalparallelMountCmdany-port966780516/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739796071417985417" to /tmp/TestFunctionalparallelMountCmdany-port966780516/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739796071417985417" to /tmp/TestFunctionalparallelMountCmdany-port966780516/001/test-1739796071417985417
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (447.073326ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0217 12:41:11.869750 2085373 retry.go:31] will retry after 434.365188ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 17 12:41 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 17 12:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 17 12:41 test-1739796071417985417
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh cat /mount-9p/test-1739796071417985417
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-082454 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [89820f01-3982-45f0-893b-24592f9c0e52] Pending
helpers_test.go:344: "busybox-mount" [89820f01-3982-45f0-893b-24592f9c0e52] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [89820f01-3982-45f0-893b-24592f9c0e52] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003461775s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-082454 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdany-port966780516/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.42s)
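The any-port mount test shows the harness's retry pattern: the first "findmnt -T /mount-9p | grep 9p" fails because the 9p mount is not ready yet, retry.go waits ~434ms, and the second attempt succeeds. A minimal, illustrative Go sketch of that retry-until-ready pattern is below; the helper name, attempt count, and backoff values are assumptions chosen for the example, not the harness's actual retry policy.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryCommand re-runs a check until it succeeds or the attempts run out,
// doubling the delay between attempts.
func retryCommand(attempts int, delay time.Duration, name string, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
}

func main() {
	// Same mount-visibility check as the log, wrapped in the retry helper.
	err := retryCommand(5, 500*time.Millisecond,
		"out/minikube-linux-arm64", "-p", "functional-082454",
		"ssh", "findmnt -T /mount-9p | grep 9p")
	fmt.Println("mount visible:", err == nil, err)
}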
TestFunctional/parallel/ServiceCmd/Format (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)
TestFunctional/parallel/ServiceCmd/URL (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31948
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
TestFunctional/parallel/MountCmd/specific-port (1.79s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdspecific-port3573806096/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.18929ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0217 12:41:19.216032 2085373 retry.go:31] will retry after 264.482801ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdspecific-port3573806096/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 ssh "sudo umount -f /mount-9p": exit status 1 (300.517975ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-082454 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdspecific-port3573806096/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)
TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122272758/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122272758/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122272758/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T" /mount1: exit status 1 (583.975938ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0217 12:41:21.215895 2085373 retry.go:31] will retry after 716.33663ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-082454 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122272758/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122272758/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082454 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2122272758/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)
TestFunctional/parallel/Version/short (0.1s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)
TestFunctional/parallel/Version/components (1.42s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-082454 version -o=json --components: (1.416402072s)
--- PASS: TestFunctional/parallel/Version/components (1.42s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082454 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-082454
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-082454
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082454 image ls --format short --alsologtostderr:
I0217 12:41:31.924212 2123843 out.go:345] Setting OutFile to fd 1 ...
I0217 12:41:31.924395 2123843 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:31.924422 2123843 out.go:358] Setting ErrFile to fd 2...
I0217 12:41:31.924441 2123843 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:31.924732 2123843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
I0217 12:41:31.926394 2123843 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:31.926598 2123843 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:31.927121 2123843 cli_runner.go:164] Run: docker container inspect functional-082454 --format={{.State.Status}}
I0217 12:41:31.952117 2123843 ssh_runner.go:195] Run: systemctl --version
I0217 12:41:31.952168 2123843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082454
I0217 12:41:31.971427 2123843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49787 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/functional-082454/id_rsa Username:docker}
I0217 12:41:32.070528 2123843 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082454 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | sha256:9b1b7b | 68.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:7fc9d4 | 67.9MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:ddb38c | 18.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
| docker.io/library/minikube-local-cache-test | functional-082454  | sha256:0678c6 | 991B   |
| docker.io/library/nginx                     | alpine             | sha256:cedb66 | 21.7MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e124fb | 27.4MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20250214-acbabc1a | sha256:ee75e2 | 35.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:265c2d | 26.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kicbase/echo-server               | functional-082454  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:293376 | 24MB   |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082454 image ls --format table --alsologtostderr:
I0217 12:41:33.062180 2124109 out.go:345] Setting OutFile to fd 1 ...
I0217 12:41:33.062336 2124109 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:33.062348 2124109 out.go:358] Setting ErrFile to fd 2...
I0217 12:41:33.062354 2124109 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:33.063053 2124109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
I0217 12:41:33.063821 2124109 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:33.063953 2124109 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:33.064445 2124109 cli_runner.go:164] Run: docker container inspect functional-082454 --format={{.State.Status}}
I0217 12:41:33.089943 2124109 ssh_runner.go:195] Run: systemctl --version
I0217 12:41:33.090000 2124109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082454
I0217 12:41:33.117114 2124109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49787 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/functional-082454/id_rsa Username:docker}
I0217 12:41:33.210964 2124109 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082454 image ls --format json --alsologtostderr:
[{"id":"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"26217748"},{"id":"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"23968433"},{"id":"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"27363416"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae81
99917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"18922457"},{"id":"sha256
:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-082454"],"size":"2173567"},{"id":"sha256:0678c69a4d3b2e0016e278babcb3aee04e6bc9125395c62ec99ab1551bd67fee","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-082454"],"size":"991"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21684747"},{"id":"sha256:9b1b7be
1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58","repoDigests":["docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34"],"repoTags":["docker.io/library/nginx:latest"],"size":"68631146"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"siz
e":"67941650"},{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"35677907"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082454 image ls --format json --alsologtostderr:
I0217 12:41:32.763829 2124021 out.go:345] Setting OutFile to fd 1 ...
I0217 12:41:32.765407 2124021 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:32.765426 2124021 out.go:358] Setting ErrFile to fd 2...
I0217 12:41:32.765432 2124021 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:32.765825 2124021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
I0217 12:41:32.766707 2124021 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:32.766834 2124021 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:32.767452 2124021 cli_runner.go:164] Run: docker container inspect functional-082454 --format={{.State.Status}}
I0217 12:41:32.795322 2124021 ssh_runner.go:195] Run: systemctl --version
I0217 12:41:32.795379 2124021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082454
I0217 12:41:32.820256 2124021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49787 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/functional-082454/id_rsa Username:docker}
I0217 12:41:32.915077 2124021 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082454 image ls --format yaml --alsologtostderr:
- id: sha256:0678c69a4d3b2e0016e278babcb3aee04e6bc9125395c62ec99ab1551bd67fee
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-082454
size: "991"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "67941650"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-082454
size: "2173567"
- id: sha256:cedb667e1a7b4e6d843a4f74f1f2db0dac1c29b43978aa72dbae2193e3b8eea3
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
repoTags:
- docker.io/library/nginx:alpine
size: "21684747"
- id: sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "23968433"
- id: sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "27363416"
- id: sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "18922457"
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:ee75e27fff91c8d59835f9a3efdf968ff404e580bad69746a65bcf3e304ab26f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "35677907"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "26217748"
- id: sha256:9b1b7be1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58
repoDigests:
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
repoTags:
- docker.io/library/nginx:latest
size: "68631146"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082454 image ls --format yaml --alsologtostderr:
I0217 12:41:32.172024 2123891 out.go:345] Setting OutFile to fd 1 ...
I0217 12:41:32.172355 2123891 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:32.172392 2123891 out.go:358] Setting ErrFile to fd 2...
I0217 12:41:32.172415 2123891 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:32.172887 2123891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
I0217 12:41:32.174296 2123891 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:32.174533 2123891 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:32.175514 2123891 cli_runner.go:164] Run: docker container inspect functional-082454 --format={{.State.Status}}
I0217 12:41:32.196683 2123891 ssh_runner.go:195] Run: systemctl --version
I0217 12:41:32.196744 2123891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082454
I0217 12:41:32.214957 2123891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49787 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/functional-082454/id_rsa Username:docker}
I0217 12:41:32.306040 2123891 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082454 ssh pgrep buildkitd: exit status 1 (308.687822ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image build -t localhost/my-image:functional-082454 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-082454 image build -t localhost/my-image:functional-082454 testdata/build --alsologtostderr: (3.331249189s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082454 image build -t localhost/my-image:functional-082454 testdata/build --alsologtostderr:
I0217 12:41:32.740581 2124017 out.go:345] Setting OutFile to fd 1 ...
I0217 12:41:32.742056 2124017 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:32.742105 2124017 out.go:358] Setting ErrFile to fd 2...
I0217 12:41:32.742128 2124017 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0217 12:41:32.742429 2124017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
I0217 12:41:32.743147 2124017 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:32.745285 2124017 config.go:182] Loaded profile config "functional-082454": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0217 12:41:32.745829 2124017 cli_runner.go:164] Run: docker container inspect functional-082454 --format={{.State.Status}}
I0217 12:41:32.769442 2124017 ssh_runner.go:195] Run: systemctl --version
I0217 12:41:32.769488 2124017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082454
I0217 12:41:32.792013 2124017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49787 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/functional-082454/id_rsa Username:docker}
I0217 12:41:32.889865 2124017 build_images.go:161] Building image from path: /tmp/build.1848856965.tar
I0217 12:41:32.889989 2124017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0217 12:41:32.899005 2124017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1848856965.tar
I0217 12:41:32.902475 2124017 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1848856965.tar: stat -c "%s %y" /var/lib/minikube/build/build.1848856965.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1848856965.tar': No such file or directory
I0217 12:41:32.902507 2124017 ssh_runner.go:362] scp /tmp/build.1848856965.tar --> /var/lib/minikube/build/build.1848856965.tar (3072 bytes)
I0217 12:41:32.930009 2124017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1848856965
I0217 12:41:32.939222 2124017 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1848856965 -xf /var/lib/minikube/build/build.1848856965.tar
I0217 12:41:32.948515 2124017 containerd.go:394] Building image: /var/lib/minikube/build/build.1848856965
I0217 12:41:32.948595 2124017 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1848856965 --local dockerfile=/var/lib/minikube/build/build.1848856965 --output type=image,name=localhost/my-image:functional-082454
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:fe6bf55feda09385c39f5ac3b10b718552a6296b99989c592f9afe0142c6cf73
#8 exporting manifest sha256:fe6bf55feda09385c39f5ac3b10b718552a6296b99989c592f9afe0142c6cf73 0.0s done
#8 exporting config sha256:f43c923f313c32bab8e6858b632b7e78318c913c790459a627aa80bad3762343 0.0s done
#8 naming to localhost/my-image:functional-082454 done
#8 DONE 0.2s
I0217 12:41:35.967484 2124017 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1848856965 --local dockerfile=/var/lib/minikube/build/build.1848856965 --output type=image,name=localhost/my-image:functional-082454: (3.018849873s)
I0217 12:41:35.967552 2124017 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1848856965
I0217 12:41:35.977106 2124017 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1848856965.tar
I0217 12:41:35.986543 2124017 build_images.go:217] Built localhost/my-image:functional-082454 from /tmp/build.1848856965.tar
I0217 12:41:35.986573 2124017 build_images.go:133] succeeded building to: functional-082454
I0217 12:41:35.986578 2124017 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)
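Note (reconstruction, not the checked-in file): judging from the buildkit steps above ([1/3] FROM, [2/3] RUN true, [3/3] ADD content.txt /), the Dockerfile under testdata/build is roughly:

	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /

The test tars that directory, copies the tar onto the node, and builds it there with buildctl, which is why the resulting image lands in the node's container runtime rather than in the host's Docker daemon.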

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-082454
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image load --daemon kicbase/echo-server:functional-082454 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-arm64 -p functional-082454 image load --daemon kicbase/echo-server:functional-082454 --alsologtostderr: (1.127002759s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image load --daemon kicbase/echo-server:functional-082454 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-082454
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image load --daemon kicbase/echo-server:functional-082454 --alsologtostderr
2025/02/17 12:41:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image save kicbase/echo-server:functional-082454 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image rm kicbase/echo-server:functional-082454 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-082454
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-082454 image save --daemon kicbase/echo-server:functional-082454 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-082454
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
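Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full save/load round trip. Condensed into one by-hand sequence against this profile (same tag as above; the tar path here is illustrative, the run used the workspace path shown in the log):

	out/minikube-linux-arm64 -p functional-082454 image save kicbase/echo-server:functional-082454 /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-082454 image rm kicbase/echo-server:functional-082454
	out/minikube-linux-arm64 -p functional-082454 image load /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-082454 image save --daemon kicbase/echo-server:functional-082454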

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-082454
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-082454
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-082454
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (116.41s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-477073 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0217 12:41:54.919529 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-477073 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m55.527081635s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (116.41s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (34.08s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-477073 -- rollout status deployment/busybox: (30.922576599s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-54z9c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-b89mf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-rx6xb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-54z9c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-b89mf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-rx6xb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-54z9c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-b89mf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-rx6xb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (34.08s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-54z9c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-54z9c -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-b89mf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-b89mf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-rx6xb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0217 12:44:11.054960 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-477073 -- exec busybox-58667487b6-rx6xb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.73s)
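Note: ha_test.go:207 extracts the address that host.minikube.internal resolves to inside the pod (line 5, field 3 of the nslookup output), and ha_test.go:218 then pings the host-side address 192.168.49.1 from the same pod. A condensed by-hand version of the same check for one pod (pod name taken from this run; plain kubectl used here instead of the out/minikube-linux-arm64 kubectl wrapper):

	kubectl --context ha-477073 exec busybox-58667487b6-54z9c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context ha-477073 exec busybox-58667487b6-54z9c -- ping -c 1 192.168.49.1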

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-477073 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-477073 -v=7 --alsologtostderr: (22.040110241s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr: (1.035479421s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-477073 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.038173048s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.47s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp testdata/cp-test.txt ha-477073:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2206154527/001/cp-test_ha-477073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073:/home/docker/cp-test.txt ha-477073-m02:/home/docker/cp-test_ha-477073_ha-477073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m02 "sudo cat /home/docker/cp-test_ha-477073_ha-477073-m02.txt"
E0217 12:44:38.761508 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073:/home/docker/cp-test.txt ha-477073-m03:/home/docker/cp-test_ha-477073_ha-477073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m03 "sudo cat /home/docker/cp-test_ha-477073_ha-477073-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073:/home/docker/cp-test.txt ha-477073-m04:/home/docker/cp-test_ha-477073_ha-477073-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m04 "sudo cat /home/docker/cp-test_ha-477073_ha-477073-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp testdata/cp-test.txt ha-477073-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2206154527/001/cp-test_ha-477073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m02:/home/docker/cp-test.txt ha-477073:/home/docker/cp-test_ha-477073-m02_ha-477073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073 "sudo cat /home/docker/cp-test_ha-477073-m02_ha-477073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m02:/home/docker/cp-test.txt ha-477073-m03:/home/docker/cp-test_ha-477073-m02_ha-477073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m03 "sudo cat /home/docker/cp-test_ha-477073-m02_ha-477073-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m02:/home/docker/cp-test.txt ha-477073-m04:/home/docker/cp-test_ha-477073-m02_ha-477073-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m04 "sudo cat /home/docker/cp-test_ha-477073-m02_ha-477073-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp testdata/cp-test.txt ha-477073-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2206154527/001/cp-test_ha-477073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m03:/home/docker/cp-test.txt ha-477073:/home/docker/cp-test_ha-477073-m03_ha-477073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073 "sudo cat /home/docker/cp-test_ha-477073-m03_ha-477073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m03:/home/docker/cp-test.txt ha-477073-m02:/home/docker/cp-test_ha-477073-m03_ha-477073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m02 "sudo cat /home/docker/cp-test_ha-477073-m03_ha-477073-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m03:/home/docker/cp-test.txt ha-477073-m04:/home/docker/cp-test_ha-477073-m03_ha-477073-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m04 "sudo cat /home/docker/cp-test_ha-477073-m03_ha-477073-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp testdata/cp-test.txt ha-477073-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2206154527/001/cp-test_ha-477073-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m04:/home/docker/cp-test.txt ha-477073:/home/docker/cp-test_ha-477073-m04_ha-477073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073 "sudo cat /home/docker/cp-test_ha-477073-m04_ha-477073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m04:/home/docker/cp-test.txt ha-477073-m02:/home/docker/cp-test_ha-477073-m04_ha-477073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m02 "sudo cat /home/docker/cp-test_ha-477073-m04_ha-477073-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 cp ha-477073-m04:/home/docker/cp-test.txt ha-477073-m03:/home/docker/cp-test_ha-477073-m04_ha-477073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m03 "sudo cat /home/docker/cp-test_ha-477073-m04_ha-477073-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.47s)
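Note: the block above follows one pattern throughout: copy testdata/cp-test.txt onto a node, copy it from that node to the host and to every other node, and verify each hop with ssh plus sudo cat. One hop of that pattern, as it would be run by hand (commands taken verbatim from the log):

	out/minikube-linux-arm64 -p ha-477073 cp testdata/cp-test.txt ha-477073-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-477073 ssh -n ha-477073-m02 "sudo cat /home/docker/cp-test.txt"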

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-477073 node stop m02 -v=7 --alsologtostderr: (12.054329769s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr: exit status 7 (768.77058ms)
-- stdout --
	ha-477073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-477073-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-477073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-477073-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0217 12:45:07.226957 2140537 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:45:07.227148 2140537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:45:07.227179 2140537 out.go:358] Setting ErrFile to fd 2...
	I0217 12:45:07.227197 2140537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:45:07.227512 2140537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 12:45:07.227745 2140537 out.go:352] Setting JSON to false
	I0217 12:45:07.227831 2140537 mustload.go:65] Loading cluster: ha-477073
	I0217 12:45:07.227898 2140537 notify.go:220] Checking for updates...
	I0217 12:45:07.228383 2140537 config.go:182] Loaded profile config "ha-477073": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0217 12:45:07.228424 2140537 status.go:174] checking status of ha-477073 ...
	I0217 12:45:07.228991 2140537 cli_runner.go:164] Run: docker container inspect ha-477073 --format={{.State.Status}}
	I0217 12:45:07.249932 2140537 status.go:371] ha-477073 host status = "Running" (err=<nil>)
	I0217 12:45:07.249954 2140537 host.go:66] Checking if "ha-477073" exists ...
	I0217 12:45:07.250252 2140537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-477073
	I0217 12:45:07.285520 2140537 host.go:66] Checking if "ha-477073" exists ...
	I0217 12:45:07.285881 2140537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:45:07.285929 2140537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-477073
	I0217 12:45:07.306162 2140537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49792 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/ha-477073/id_rsa Username:docker}
	I0217 12:45:07.403820 2140537 ssh_runner.go:195] Run: systemctl --version
	I0217 12:45:07.413653 2140537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:45:07.429320 2140537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:45:07.490828 2140537 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:73 SystemTime:2025-02-17 12:45:07.479553188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:45:07.491425 2140537 kubeconfig.go:125] found "ha-477073" server: "https://192.168.49.254:8443"
	I0217 12:45:07.491469 2140537 api_server.go:166] Checking apiserver status ...
	I0217 12:45:07.491581 2140537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 12:45:07.502753 2140537 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1404/cgroup
	I0217 12:45:07.512386 2140537 api_server.go:182] apiserver freezer: "7:freezer:/docker/309067446e5abb4181f9be59dec3cd0156a67a0521ccef49a3bcd60b123b906e/kubepods/burstable/pod65f9adfba8feea73c78d1728cf534b60/9fe0cb47523ce6f038a9a9e9a5d6e4920d78d9627928c464f3a49eef9c4f4b3c"
	I0217 12:45:07.512460 2140537 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/309067446e5abb4181f9be59dec3cd0156a67a0521ccef49a3bcd60b123b906e/kubepods/burstable/pod65f9adfba8feea73c78d1728cf534b60/9fe0cb47523ce6f038a9a9e9a5d6e4920d78d9627928c464f3a49eef9c4f4b3c/freezer.state
	I0217 12:45:07.522223 2140537 api_server.go:204] freezer state: "THAWED"
	I0217 12:45:07.522263 2140537 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0217 12:45:07.530502 2140537 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0217 12:45:07.530528 2140537 status.go:463] ha-477073 apiserver status = Running (err=<nil>)
	I0217 12:45:07.530539 2140537 status.go:176] ha-477073 status: &{Name:ha-477073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:45:07.530579 2140537 status.go:174] checking status of ha-477073-m02 ...
	I0217 12:45:07.530915 2140537 cli_runner.go:164] Run: docker container inspect ha-477073-m02 --format={{.State.Status}}
	I0217 12:45:07.551669 2140537 status.go:371] ha-477073-m02 host status = "Stopped" (err=<nil>)
	I0217 12:45:07.551698 2140537 status.go:384] host is not running, skipping remaining checks
	I0217 12:45:07.551706 2140537 status.go:176] ha-477073-m02 status: &{Name:ha-477073-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:45:07.551726 2140537 status.go:174] checking status of ha-477073-m03 ...
	I0217 12:45:07.552038 2140537 cli_runner.go:164] Run: docker container inspect ha-477073-m03 --format={{.State.Status}}
	I0217 12:45:07.570926 2140537 status.go:371] ha-477073-m03 host status = "Running" (err=<nil>)
	I0217 12:45:07.570953 2140537 host.go:66] Checking if "ha-477073-m03" exists ...
	I0217 12:45:07.571272 2140537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-477073-m03
	I0217 12:45:07.589220 2140537 host.go:66] Checking if "ha-477073-m03" exists ...
	I0217 12:45:07.589714 2140537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:45:07.589779 2140537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-477073-m03
	I0217 12:45:07.607773 2140537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49802 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/ha-477073-m03/id_rsa Username:docker}
	I0217 12:45:07.699627 2140537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:45:07.713636 2140537 kubeconfig.go:125] found "ha-477073" server: "https://192.168.49.254:8443"
	I0217 12:45:07.713764 2140537 api_server.go:166] Checking apiserver status ...
	I0217 12:45:07.713862 2140537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 12:45:07.728209 2140537 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1307/cgroup
	I0217 12:45:07.738723 2140537 api_server.go:182] apiserver freezer: "7:freezer:/docker/4965e3d26eb69e914d403f254087de4a6f1eb7300c8d8a7833ccd1b2b8c36405/kubepods/burstable/pod6323821db9844f9574c71e267adf4996/2cb180c1348233b94758d497f339bab8aea738daa056b82c2478e888ed6dfd59"
	I0217 12:45:07.738810 2140537 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4965e3d26eb69e914d403f254087de4a6f1eb7300c8d8a7833ccd1b2b8c36405/kubepods/burstable/pod6323821db9844f9574c71e267adf4996/2cb180c1348233b94758d497f339bab8aea738daa056b82c2478e888ed6dfd59/freezer.state
	I0217 12:45:07.754327 2140537 api_server.go:204] freezer state: "THAWED"
	I0217 12:45:07.754362 2140537 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0217 12:45:07.768379 2140537 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0217 12:45:07.768411 2140537 status.go:463] ha-477073-m03 apiserver status = Running (err=<nil>)
	I0217 12:45:07.768421 2140537 status.go:176] ha-477073-m03 status: &{Name:ha-477073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:45:07.768458 2140537 status.go:174] checking status of ha-477073-m04 ...
	I0217 12:45:07.768795 2140537 cli_runner.go:164] Run: docker container inspect ha-477073-m04 --format={{.State.Status}}
	I0217 12:45:07.791461 2140537 status.go:371] ha-477073-m04 host status = "Running" (err=<nil>)
	I0217 12:45:07.791490 2140537 host.go:66] Checking if "ha-477073-m04" exists ...
	I0217 12:45:07.791793 2140537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-477073-m04
	I0217 12:45:07.809383 2140537 host.go:66] Checking if "ha-477073-m04" exists ...
	I0217 12:45:07.809835 2140537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:45:07.809880 2140537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-477073-m04
	I0217 12:45:07.830208 2140537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49807 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/ha-477073-m04/id_rsa Username:docker}
	I0217 12:45:07.922598 2140537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:45:07.935203 2140537 status.go:176] ha-477073-m04 status: &{Name:ha-477073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-477073 node start m02 -v=7 --alsologtostderr: (17.629322219s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr: (1.07607529s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.004369844s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-477073 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-477073 -v=7 --alsologtostderr
E0217 12:45:43.246987 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:43.253952 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:43.265287 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:43.289949 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:43.331482 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:43.413601 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:43.574824 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:43.896490 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:44.538099 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:45.819869 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:48.381793 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:45:53.503151 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:46:03.745688 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-477073 -v=7 --alsologtostderr: (37.21595254s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-477073 --wait=true -v=7 --alsologtostderr
E0217 12:46:24.227235 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:47:05.188954 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-477073 --wait=true -v=7 --alsologtostderr: (1m37.286653815s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-477073
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.68s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-477073 node delete m03 -v=7 --alsologtostderr: (9.740470019s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 stop -v=7 --alsologtostderr
E0217 12:48:27.113977 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-477073 stop -v=7 --alsologtostderr: (35.783925583s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr: exit status 7 (120.963526ms)

                                                
                                                
-- stdout --
	ha-477073
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-477073-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-477073-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0217 12:48:30.540137 2155093 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:48:30.540500 2155093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:48:30.540517 2155093 out.go:358] Setting ErrFile to fd 2...
	I0217 12:48:30.540525 2155093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:48:30.540791 2155093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 12:48:30.540993 2155093 out.go:352] Setting JSON to false
	I0217 12:48:30.541036 2155093 mustload.go:65] Loading cluster: ha-477073
	I0217 12:48:30.541133 2155093 notify.go:220] Checking for updates...
	I0217 12:48:30.541475 2155093 config.go:182] Loaded profile config "ha-477073": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0217 12:48:30.541500 2155093 status.go:174] checking status of ha-477073 ...
	I0217 12:48:30.542127 2155093 cli_runner.go:164] Run: docker container inspect ha-477073 --format={{.State.Status}}
	I0217 12:48:30.561576 2155093 status.go:371] ha-477073 host status = "Stopped" (err=<nil>)
	I0217 12:48:30.561599 2155093 status.go:384] host is not running, skipping remaining checks
	I0217 12:48:30.561607 2155093 status.go:176] ha-477073 status: &{Name:ha-477073 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:48:30.561634 2155093 status.go:174] checking status of ha-477073-m02 ...
	I0217 12:48:30.561967 2155093 cli_runner.go:164] Run: docker container inspect ha-477073-m02 --format={{.State.Status}}
	I0217 12:48:30.587136 2155093 status.go:371] ha-477073-m02 host status = "Stopped" (err=<nil>)
	I0217 12:48:30.587161 2155093 status.go:384] host is not running, skipping remaining checks
	I0217 12:48:30.587169 2155093 status.go:176] ha-477073-m02 status: &{Name:ha-477073-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:48:30.587199 2155093 status.go:174] checking status of ha-477073-m04 ...
	I0217 12:48:30.587513 2155093 cli_runner.go:164] Run: docker container inspect ha-477073-m04 --format={{.State.Status}}
	I0217 12:48:30.605507 2155093 status.go:371] ha-477073-m04 host status = "Stopped" (err=<nil>)
	I0217 12:48:30.605529 2155093 status.go:384] host is not running, skipping remaining checks
	I0217 12:48:30.605537 2155093 status.go:176] ha-477073-m04 status: &{Name:ha-477073-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (63.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-477073 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0217 12:49:11.055349 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-477073 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.97537048s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (63.92s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (42.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-477073 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-477073 --control-plane -v=7 --alsologtostderr: (41.722219712s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-477073 status -v=7 --alsologtostderr: (1.039510212s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.76s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.00s)

                                                
                                    
TestJSONOutput/start/Command (52.03s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-721653 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0217 12:50:43.247178 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 12:51:10.961843 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-721653 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (52.022560048s)
--- PASS: TestJSONOutput/start/Command (52.03s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-721653 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-721653 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.78s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-721653 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-721653 --output=json --user=testUser: (5.780454664s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-850107 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-850107 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.845917ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8ad19466-9892-489f-b4b8-4ffbc3ce93d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-850107] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"301cd35a-b748-4766-aad5-0c0843b74664","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20427"}}
	{"specversion":"1.0","id":"7e27c90a-399c-4d67-9ded-88949f219e83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e6057689-ca3f-4350-9aee-edbab4c43ac9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig"}}
	{"specversion":"1.0","id":"caf36485-db50-4c42-94d1-08893d5498d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube"}}
	{"specversion":"1.0","id":"f9a6578b-02b7-4f04-ae4c-3b235e0e0236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d9f40ca0-b281-45f3-8f9f-a132ddd6f1ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4d515fb3-415a-426a-bb8e-3beb7a8ead29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-850107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-850107
--- PASS: TestErrorJSONOutput (0.25s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.17s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-195188 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-195188 --network=: (39.026020764s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-195188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-195188
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-195188: (2.124981551s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.17s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.46s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-522745 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-522745 --network=bridge: (31.406354316s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-522745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-522745
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-522745: (2.020318159s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.46s)

                                                
                                    
TestKicExistingNetwork (33.78s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0217 12:52:46.006605 2085373 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0217 12:52:46.023545 2085373 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0217 12:52:46.023644 2085373 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0217 12:52:46.024401 2085373 cli_runner.go:164] Run: docker network inspect existing-network
W0217 12:52:46.041058 2085373 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0217 12:52:46.041095 2085373 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0217 12:52:46.041110 2085373 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0217 12:52:46.041312 2085373 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0217 12:52:46.059715 2085373 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bc7c99ce384a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:96:07:c7:10} reservation:<nil>}
I0217 12:52:46.065308 2085373 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0217 12:52:46.065988 2085373 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d90320}
I0217 12:52:46.066042 2085373 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0217 12:52:46.066108 2085373 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0217 12:52:46.138537 2085373 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-560824 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-560824 --network=existing-network: (31.649408769s)
helpers_test.go:175: Cleaning up "existing-network-560824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-560824
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-560824: (1.961923564s)
I0217 12:53:19.766863 2085373 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.78s)

                                                
                                    
TestKicCustomSubnet (34.64s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-058030 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-058030 --subnet=192.168.60.0/24: (32.489325887s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-058030 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-058030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-058030
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-058030: (2.131156872s)
--- PASS: TestKicCustomSubnet (34.64s)

                                                
                                    
TestKicStaticIP (37.96s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-288772 --static-ip=192.168.200.200
E0217 12:54:11.055460 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-288772 --static-ip=192.168.200.200: (35.704773125s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-288772 ip
helpers_test.go:175: Cleaning up "static-ip-288772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-288772
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-288772: (2.098461653s)
--- PASS: TestKicStaticIP (37.96s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (67.79s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-614905 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-614905 --driver=docker  --container-runtime=containerd: (31.079375862s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-617843 --driver=docker  --container-runtime=containerd
E0217 12:55:34.122825 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-617843 --driver=docker  --container-runtime=containerd: (30.916681801s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-614905
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-617843
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-617843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-617843
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-617843: (2.124142806s)
helpers_test.go:175: Cleaning up "first-614905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-614905
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-614905: (2.283337851s)
--- PASS: TestMinikubeProfile (67.79s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-034179 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0217 12:55:43.247157 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-034179 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.500325011s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.50s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-034179 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-036140 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-036140 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.867033946s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.87s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-036140 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-034179 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-034179 --alsologtostderr -v=5: (1.648629697s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-036140 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-036140
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-036140: (1.212935821s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.39s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-036140
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-036140: (6.387809957s)
--- PASS: TestMountStart/serial/RestartStopped (7.39s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-036140 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (77.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-964214 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-964214 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.522594729s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.08s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (15.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-964214 -- rollout status deployment/busybox: (13.308648277s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-9tspz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-kwk9s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-9tspz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-kwk9s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-9tspz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-kwk9s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.30s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-9tspz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-9tspz -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-kwk9s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-964214 -- exec busybox-58667487b6-kwk9s -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                    
TestMultiNode/serial/AddNode (15.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-964214 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-964214 -v 3 --alsologtostderr: (14.633595285s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.53s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-964214 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp testdata/cp-test.txt multinode-964214:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp multinode-964214:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4005698872/001/cp-test_multinode-964214.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp multinode-964214:/home/docker/cp-test.txt multinode-964214-m02:/home/docker/cp-test_multinode-964214_multinode-964214-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m02 "sudo cat /home/docker/cp-test_multinode-964214_multinode-964214-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp multinode-964214:/home/docker/cp-test.txt multinode-964214-m03:/home/docker/cp-test_multinode-964214_multinode-964214-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m03 "sudo cat /home/docker/cp-test_multinode-964214_multinode-964214-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp testdata/cp-test.txt multinode-964214-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp multinode-964214-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4005698872/001/cp-test_multinode-964214-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp multinode-964214-m02:/home/docker/cp-test.txt multinode-964214:/home/docker/cp-test_multinode-964214-m02_multinode-964214.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214 "sudo cat /home/docker/cp-test_multinode-964214-m02_multinode-964214.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp multinode-964214-m02:/home/docker/cp-test.txt multinode-964214-m03:/home/docker/cp-test_multinode-964214-m02_multinode-964214-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m03 "sudo cat /home/docker/cp-test_multinode-964214-m02_multinode-964214-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp testdata/cp-test.txt multinode-964214-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp multinode-964214-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4005698872/001/cp-test_multinode-964214-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp multinode-964214-m03:/home/docker/cp-test.txt multinode-964214:/home/docker/cp-test_multinode-964214-m03_multinode-964214.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214 "sudo cat /home/docker/cp-test_multinode-964214-m03_multinode-964214.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 cp multinode-964214-m03:/home/docker/cp-test.txt multinode-964214-m02:/home/docker/cp-test_multinode-964214-m03_multinode-964214-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 ssh -n multinode-964214-m02 "sudo cat /home/docker/cp-test_multinode-964214-m03_multinode-964214-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.17s)
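Note: every cp above is immediately verified with an `ssh ... sudo cat` on the destination node. A rough sketch of that copy-and-verify loop follows, assuming a `minikube` binary on PATH and using the profile, node, and file names from this run.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// copyAndVerify copies a local file to a node in the profile, reads it back
// over ssh, and compares the contents, in the same spirit as the log above.
func copyAndVerify(profile, node, local, remote string) error {
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	dst := node + ":" + remote
	if err := exec.Command("minikube", "-p", profile, "cp", local, dst).Run(); err != nil {
		return fmt.Errorf("cp to %s failed: %w", dst, err)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+remote).Output()
	if err != nil {
		return fmt.Errorf("read back from %s failed: %w", node, err)
	}
	if strings.TrimSpace(string(got)) != strings.TrimSpace(string(want)) {
		return fmt.Errorf("contents differ on %s", node)
	}
	return nil
}

func main() {
	nodes := []string{"multinode-964214", "multinode-964214-m02", "multinode-964214-m03"}
	for _, node := range nodes {
		if err := copyAndVerify("multinode-964214", node, "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
			fmt.Println(err)
		}
	}
}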

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-964214 node stop m03: (1.224572654s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-964214 status: exit status 7 (524.956712ms)

                                                
                                                
-- stdout --
	multinode-964214
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-964214-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-964214-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-964214 status --alsologtostderr: exit status 7 (518.757372ms)

                                                
                                                
-- stdout --
	multinode-964214
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-964214-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-964214-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0217 12:58:13.450338 2209521 out.go:345] Setting OutFile to fd 1 ...
	I0217 12:58:13.450616 2209521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:58:13.450651 2209521 out.go:358] Setting ErrFile to fd 2...
	I0217 12:58:13.450672 2209521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 12:58:13.450935 2209521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 12:58:13.451151 2209521 out.go:352] Setting JSON to false
	I0217 12:58:13.451223 2209521 mustload.go:65] Loading cluster: multinode-964214
	I0217 12:58:13.451311 2209521 notify.go:220] Checking for updates...
	I0217 12:58:13.451826 2209521 config.go:182] Loaded profile config "multinode-964214": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0217 12:58:13.451874 2209521 status.go:174] checking status of multinode-964214 ...
	I0217 12:58:13.452469 2209521 cli_runner.go:164] Run: docker container inspect multinode-964214 --format={{.State.Status}}
	I0217 12:58:13.475085 2209521 status.go:371] multinode-964214 host status = "Running" (err=<nil>)
	I0217 12:58:13.475109 2209521 host.go:66] Checking if "multinode-964214" exists ...
	I0217 12:58:13.475552 2209521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-964214
	I0217 12:58:13.498074 2209521 host.go:66] Checking if "multinode-964214" exists ...
	I0217 12:58:13.498457 2209521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:58:13.498512 2209521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-964214
	I0217 12:58:13.516859 2209521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49912 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/multinode-964214/id_rsa Username:docker}
	I0217 12:58:13.611089 2209521 ssh_runner.go:195] Run: systemctl --version
	I0217 12:58:13.615380 2209521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:58:13.627039 2209521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 12:58:13.684780 2209521 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:63 SystemTime:2025-02-17 12:58:13.675080608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 12:58:13.685429 2209521 kubeconfig.go:125] found "multinode-964214" server: "https://192.168.58.2:8443"
	I0217 12:58:13.685467 2209521 api_server.go:166] Checking apiserver status ...
	I0217 12:58:13.685535 2209521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0217 12:58:13.696739 2209521 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	I0217 12:58:13.706216 2209521 api_server.go:182] apiserver freezer: "7:freezer:/docker/b4b0e8dcf45fdd8dd6dfbd7004ca14bc0891033501bd0b4fd354bd31e7fe9f73/kubepods/burstable/pod03a122ed53ee81d8e48e7f24b2c9f289/c87126909cfef3898f3b20531f37048f83c1298514635810d46db7e1a5242ecf"
	I0217 12:58:13.706289 2209521 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b4b0e8dcf45fdd8dd6dfbd7004ca14bc0891033501bd0b4fd354bd31e7fe9f73/kubepods/burstable/pod03a122ed53ee81d8e48e7f24b2c9f289/c87126909cfef3898f3b20531f37048f83c1298514635810d46db7e1a5242ecf/freezer.state
	I0217 12:58:13.715339 2209521 api_server.go:204] freezer state: "THAWED"
	I0217 12:58:13.715373 2209521 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0217 12:58:13.724307 2209521 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0217 12:58:13.724334 2209521 status.go:463] multinode-964214 apiserver status = Running (err=<nil>)
	I0217 12:58:13.724346 2209521 status.go:176] multinode-964214 status: &{Name:multinode-964214 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:58:13.724363 2209521 status.go:174] checking status of multinode-964214-m02 ...
	I0217 12:58:13.724675 2209521 cli_runner.go:164] Run: docker container inspect multinode-964214-m02 --format={{.State.Status}}
	I0217 12:58:13.745013 2209521 status.go:371] multinode-964214-m02 host status = "Running" (err=<nil>)
	I0217 12:58:13.745041 2209521 host.go:66] Checking if "multinode-964214-m02" exists ...
	I0217 12:58:13.745434 2209521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-964214-m02
	I0217 12:58:13.771000 2209521 host.go:66] Checking if "multinode-964214-m02" exists ...
	I0217 12:58:13.771320 2209521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0217 12:58:13.771367 2209521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-964214-m02
	I0217 12:58:13.793022 2209521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49917 SSHKeyPath:/home/jenkins/minikube-integration/20427-2080001/.minikube/machines/multinode-964214-m02/id_rsa Username:docker}
	I0217 12:58:13.882697 2209521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0217 12:58:13.894331 2209521 status.go:176] multinode-964214-m02 status: &{Name:multinode-964214-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0217 12:58:13.894371 2209521 status.go:174] checking status of multinode-964214-m03 ...
	I0217 12:58:13.894672 2209521 cli_runner.go:164] Run: docker container inspect multinode-964214-m03 --format={{.State.Status}}
	I0217 12:58:13.912023 2209521 status.go:371] multinode-964214-m03 host status = "Stopped" (err=<nil>)
	I0217 12:58:13.912046 2209521 status.go:384] host is not running, skipping remaining checks
	I0217 12:58:13.912054 2209521 status.go:176] multinode-964214-m03 status: &{Name:multinode-964214-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
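Note: the exit status 7 above is expected, not a failure; with m03 stopped, `minikube status` signals degraded state through its exit code while still printing per-node status. A small sketch of tolerating that exit code while keeping the output, assuming minikube is on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// status runs `minikube status` and treats exit code 7 (one or more components
// stopped, as seen in the log above) as a valid, non-fatal result.
func status(profile string) (string, error) {
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		return string(out), nil // some nodes stopped; still report the output
	}
	return string(out), err
}

func main() {
	out, err := status("multinode-964214")
	fmt.Print(out)
	if err != nil {
		fmt.Println("status error:", err)
	}
}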

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-964214 node start m03 -v=7 --alsologtostderr: (10.204480749s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.98s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (87.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-964214
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-964214
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-964214: (24.915794948s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-964214 --wait=true -v=8 --alsologtostderr
E0217 12:59:11.055304 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-964214 --wait=true -v=8 --alsologtostderr: (1m2.922076386s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-964214
--- PASS: TestMultiNode/serial/RestartKeepsNodes (87.99s)
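Note: the assertion behind this test is that `minikube node list` reports the same nodes before the stop and after the restart. A sketch of that comparison, assuming minikube is on PATH; the stop/start steps are elided with a comment.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeList returns the trimmed lines of `minikube node list -p <profile>`.
func nodeList(profile string) ([]string, error) {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimSpace(string(out)), "\n"), nil
}

func main() {
	before, _ := nodeList("multinode-964214")
	// ... stop and start the profile here, as the test does ...
	after, _ := nodeList("multinode-964214")
	if strings.Join(before, ";") != strings.Join(after, ";") {
		fmt.Println("node list changed across restart")
		return
	}
	fmt.Printf("restart kept all %d nodes\n", len(after))
}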

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-964214 node delete m03: (4.649779447s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-964214 stop: (23.715954171s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-964214 status: exit status 7 (109.312591ms)

                                                
                                                
-- stdout --
	multinode-964214
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-964214-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-964214 status --alsologtostderr: exit status 7 (93.268269ms)

                                                
                                                
-- stdout --
	multinode-964214
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-964214-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0217 13:00:22.117469 2217567 out.go:345] Setting OutFile to fd 1 ...
	I0217 13:00:22.117584 2217567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:00:22.117594 2217567 out.go:358] Setting ErrFile to fd 2...
	I0217 13:00:22.117600 2217567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:00:22.117902 2217567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 13:00:22.118090 2217567 out.go:352] Setting JSON to false
	I0217 13:00:22.118137 2217567 mustload.go:65] Loading cluster: multinode-964214
	I0217 13:00:22.118571 2217567 config.go:182] Loaded profile config "multinode-964214": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0217 13:00:22.118597 2217567 status.go:174] checking status of multinode-964214 ...
	I0217 13:00:22.119134 2217567 cli_runner.go:164] Run: docker container inspect multinode-964214 --format={{.State.Status}}
	I0217 13:00:22.119422 2217567 notify.go:220] Checking for updates...
	I0217 13:00:22.137920 2217567 status.go:371] multinode-964214 host status = "Stopped" (err=<nil>)
	I0217 13:00:22.137943 2217567 status.go:384] host is not running, skipping remaining checks
	I0217 13:00:22.137950 2217567 status.go:176] multinode-964214 status: &{Name:multinode-964214 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0217 13:00:22.137986 2217567 status.go:174] checking status of multinode-964214-m02 ...
	I0217 13:00:22.138312 2217567 cli_runner.go:164] Run: docker container inspect multinode-964214-m02 --format={{.State.Status}}
	I0217 13:00:22.154695 2217567 status.go:371] multinode-964214-m02 host status = "Stopped" (err=<nil>)
	I0217 13:00:22.154714 2217567 status.go:384] host is not running, skipping remaining checks
	I0217 13:00:22.154721 2217567 status.go:176] multinode-964214-m02 status: &{Name:multinode-964214-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.92s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (63.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-964214 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0217 13:00:43.247589 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-964214 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.000100077s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-964214 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (63.71s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (32.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-964214
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-964214-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-964214-m02 --driver=docker  --container-runtime=containerd: exit status 14 (108.17465ms)

                                                
                                                
-- stdout --
	* [multinode-964214-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-964214-m02' is duplicated with machine name 'multinode-964214-m02' in profile 'multinode-964214'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-964214-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-964214-m03 --driver=docker  --container-runtime=containerd: (29.986705853s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-964214
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-964214: exit status 80 (412.983166ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-964214 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-964214-m03 already exists in multinode-964214-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_13.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-964214-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-964214-m03: (1.976806966s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.54s)
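Note: both non-zero exits above are the intended behavior: a new profile may not reuse a machine name that already belongs to another profile (exit 14), and `node add` refuses a node whose generated name collides with an existing standalone profile (exit 80). A toy sketch of the first kind of name check, with the names hard-coded from this run purely for illustration (not minikube's actual validation code):

package main

import "fmt"

// validateProfileName rejects a candidate profile name already used as a
// machine name inside another profile, echoing the MK_USAGE error above.
func validateProfileName(candidate string, machinesByProfile map[string][]string) error {
	for profile, machines := range machinesByProfile {
		for _, m := range machines {
			if m == candidate {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					candidate, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-964214": {"multinode-964214", "multinode-964214-m02", "multinode-964214-m03"},
	}
	if err := validateProfileName("multinode-964214-m02", existing); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}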

                                                
                                    
x
+
TestPreload (119.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-188154 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0217 13:02:06.323734 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-188154 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m13.277961819s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-188154 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-188154 image pull gcr.io/k8s-minikube/busybox: (2.009433779s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-188154
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-188154: (12.003053313s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-188154 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-188154 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (29.349244941s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-188154 image list
helpers_test.go:175: Cleaning up "test-preload-188154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-188154
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-188154: (2.467054375s)
--- PASS: TestPreload (119.33s)
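Note: the sequence here is start on v1.24.4 with --preload=false, pull gcr.io/k8s-minikube/busybox, stop, start again, and then confirm via `image list` that the manually pulled image survived the restart. A hedged sketch of that final check, assuming minikube is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether image appears in `minikube image list` for the
// profile, which is how the test verifies the image survived the stop/start.
func imagePresent(profile, image string) (bool, error) {
	out, err := exec.Command("minikube", "-p", profile, "image", "list").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := imagePresent("test-preload-188154", "gcr.io/k8s-minikube/busybox")
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	fmt.Println("busybox preserved across restart:", ok)
}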

                                                
                                    
x
+
TestScheduledStopUnix (106.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-213415 --memory=2048 --driver=docker  --container-runtime=containerd
E0217 13:04:11.055380 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-213415 --memory=2048 --driver=docker  --container-runtime=containerd: (30.15473246s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-213415 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-213415 -n scheduled-stop-213415
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-213415 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0217 13:04:32.648797 2085373 retry.go:31] will retry after 58.32µs: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.649729 2085373 retry.go:31] will retry after 115.371µs: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.650975 2085373 retry.go:31] will retry after 144.297µs: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.651268 2085373 retry.go:31] will retry after 200.371µs: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.653168 2085373 retry.go:31] will retry after 687.876µs: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.654287 2085373 retry.go:31] will retry after 921.651µs: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.655399 2085373 retry.go:31] will retry after 837.579µs: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.656515 2085373 retry.go:31] will retry after 1.979318ms: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.658695 2085373 retry.go:31] will retry after 3.508674ms: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.662922 2085373 retry.go:31] will retry after 2.925806ms: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.666142 2085373 retry.go:31] will retry after 6.626662ms: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.673355 2085373 retry.go:31] will retry after 6.36759ms: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.680599 2085373 retry.go:31] will retry after 16.205756ms: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.697823 2085373 retry.go:31] will retry after 26.18002ms: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.725088 2085373 retry.go:31] will retry after 23.761019ms: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
I0217 13:04:32.749341 2085373 retry.go:31] will retry after 53.392698ms: open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/scheduled-stop-213415/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-213415 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-213415 -n scheduled-stop-213415
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-213415
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-213415 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0217 13:05:43.247552 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-213415
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-213415: exit status 7 (78.42921ms)

                                                
                                                
-- stdout --
	scheduled-stop-213415
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-213415 -n scheduled-stop-213415
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-213415 -n scheduled-stop-213415: exit status 7 (76.619934ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-213415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-213415
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-213415: (5.155158589s)
--- PASS: TestScheduledStopUnix (106.92s)
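Note: the burst of retry.go lines above comes from polling for the scheduled-stop pid file with an increasing delay until the scheduler has written it. A minimal sketch of that poll loop; only the "retry until the file exists" shape is taken from the log, and the simple doubling delay is an assumption.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls for path until it exists or the deadline passes,
// roughly doubling the delay each attempt, like the retry.go lines above.
func waitForPidFile(path string, deadline time.Duration) ([]byte, error) {
	delay := 50 * time.Microsecond
	end := time.Now().Add(deadline)
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if !os.IsNotExist(err) || time.Now().After(end) {
			return nil, err
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// Path shape copied from this run; it depends on MINIKUBE_HOME and the profile name.
	pid, err := waitForPidFile(os.ExpandEnv("$HOME/.minikube/profiles/scheduled-stop-213415/pid"), 5*time.Second)
	if err != nil {
		fmt.Println("no scheduled-stop pid:", err)
		return
	}
	fmt.Printf("scheduled-stop pid: %s\n", pid)
}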

                                                
                                    
x
+
TestInsufficientStorage (12.99s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-921138 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-921138 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.320189927s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"18c50d4b-8b1b-45b1-8f3f-678d5b2edf26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-921138] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"37a922ea-7620-4062-8256-ae3867abb1eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20427"}}
	{"specversion":"1.0","id":"8f475493-daa6-4270-a25f-6f24dc14127b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e2893bd3-25b8-492c-a84e-64dbeb152698","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig"}}
	{"specversion":"1.0","id":"36bcbb32-cd61-47d4-89d5-eb218233ae9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube"}}
	{"specversion":"1.0","id":"43682ce0-dbef-450f-af96-6706ff4a0967","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9c706e7a-ba73-4de1-8ea9-b5ac905cac72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f62043e-012c-42d7-a63f-b62ad2334f4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a07d7e57-460d-4d0d-8c20-570046223886","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"12ed8ee0-167d-49bc-84e7-e275b8152fd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"30c94238-534d-4912-a249-0fc88c762b71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a6c8ca4d-4336-4c55-aaaa-3f1df79a677f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-921138\" primary control-plane node in \"insufficient-storage-921138\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"535614ff-bbee-47ed-accb-470bf489c0ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1739182054-20387 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9b95cc7-f27a-47b8-ae3f-a6c7c55991f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"78146f46-b6cd-41c8-9b85-76fe30ab4d4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-921138 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-921138 --output=json --layout=cluster: exit status 7 (299.102122ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-921138","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-921138","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0217 13:05:59.484793 2236490 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-921138" does not appear in /home/jenkins/minikube-integration/20427-2080001/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-921138 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-921138 --output=json --layout=cluster: exit status 7 (289.493295ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-921138","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-921138","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0217 13:05:59.776192 2236551 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-921138" does not appear in /home/jenkins/minikube-integration/20427-2080001/kubeconfig
	E0217 13:05:59.786205 2236551 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/insufficient-storage-921138/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-921138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-921138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-921138: (2.084866477s)
--- PASS: TestInsufficientStorage (12.99s)
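Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, and the failure above surfaces as an io.k8s.sigs.minikube.error event whose data carries "name":"RSRC_DOCKER_STORAGE" and "exitcode":"26". A sketch of scanning such a stream from stdin; the struct fields mirror the events shown above, everything else is an assumption.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// main reads minikube --output=json events from stdin and reports whether an
// out-of-disk error (RSRC_DOCKER_STORAGE) was emitted, as in the log above.
func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be long lines
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["name"] == "RSRC_DOCKER_STORAGE" {
			fmt.Println("detected insufficient storage, exit code", ev.Data["exitcode"])
			return
		}
	}
	fmt.Println("no storage error event found")
}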

                                                
                                    
x
+
TestRunningBinaryUpgrade (88.34s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1050882175 start -p running-upgrade-627460 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0217 13:12:14.124864 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1050882175 start -p running-upgrade-627460 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.70120949s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-627460 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-627460 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.487595939s)
helpers_test.go:175: Cleaning up "running-upgrade-627460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-627460
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-627460: (3.525462254s)
--- PASS: TestRunningBinaryUpgrade (88.34s)

                                                
                                    
x
+
TestKubernetesUpgrade (351.97s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-648571 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-648571 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.600868765s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-648571
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-648571: (1.237297253s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-648571 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-648571 status --format={{.Host}}: exit status 7 (75.862826ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-648571 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-648571 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m36.561682011s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-648571 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-648571 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-648571 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (149.337224ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-648571] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-648571
	    minikube start -p kubernetes-upgrade-648571 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6485712 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-648571 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-648571 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-648571 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (10.561053662s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-648571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-648571
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-648571: (3.662717226s)
--- PASS: TestKubernetesUpgrade (351.97s)
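Note: the exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) above is the expected refusal to move an existing v1.32.1 cluster back to v1.20.0. A toy sketch of such a guard using a naive dotted-version comparison; minikube itself presumably uses a proper semver library, so this is illustration only.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// less reports whether version a is older than b, for simple "vX.Y.Z" strings.
func less(a, b string) bool {
	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
	for i := 0; i < len(pa) && i < len(pb); i++ {
		x, _ := strconv.Atoi(pa[i])
		y, _ := strconv.Atoi(pb[i])
		if x != y {
			return x < y
		}
	}
	return len(pa) < len(pb)
}

func main() {
	existing, requested := "v1.32.1", "v1.20.0"
	if less(requested, existing) {
		fmt.Printf("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade %s cluster to %s\n",
			existing, requested)
		return
	}
	fmt.Println("upgrade (or same version) is allowed")
}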

                                                
                                    
x
+
TestMissingContainerUpgrade (182.2s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.790462346 start -p missing-upgrade-232647 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.790462346 start -p missing-upgrade-232647 --memory=2200 --driver=docker  --container-runtime=containerd: (1m36.247102866s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-232647
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-232647: (10.314668846s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-232647
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-232647 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0217 13:09:11.055311 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-232647 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.588930639s)
helpers_test.go:175: Cleaning up "missing-upgrade-232647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-232647
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-232647: (2.323222605s)
--- PASS: TestMissingContainerUpgrade (182.20s)

                                                
                                    
x
+
TestPause/serial/Start (65.1s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-873495 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-873495 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m5.101236326s)
--- PASS: TestPause/serial/Start (65.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-065794 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-065794 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (106.151116ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-065794] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
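
The expected outcome here is a usage error: combining --no-kubernetes with --kubernetes-version must fail fast with exit status 14 (MK_USAGE) before any cluster work starts. A minimal sketch of asserting that, with placeholder binary path and profile name:

// Sketch only: assert the MK_USAGE rejection (exit status 14).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "nok8s-demo",
		"--no-kubernetes", "--kubernetes-version=1.20",
		"--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("got the expected usage error (exit status 14)")
		return
	}
	fmt.Printf("unexpected result: err=%v\noutput:\n%s", err, out)
}

As the error text itself suggests, a globally pinned version can be cleared with `minikube config unset kubernetes-version`.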

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-065794 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-065794 --driver=docker  --container-runtime=containerd: (42.007506969s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-065794 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.52s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-065794 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-065794 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.996248083s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-065794 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-065794 status -o json: exit status 2 (318.866726ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-065794","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-065794
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-065794: (2.197747236s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.51s)

                                                
                                    
TestNoKubernetes/serial/Start (5.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-065794 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-065794 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.714830022s)
--- PASS: TestNoKubernetes/serial/Start (5.71s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.45s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-873495 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-873495 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.438485617s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-065794 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-065794 "sudo systemctl is-active --quiet service kubelet": exit status 1 (332.338494ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
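
Note the inverted success condition: `systemctl is-active --quiet` exits non-zero when the unit is inactive, so the failing `minikube ssh` call above is exactly what a --no-kubernetes profile should produce. A small sketch of that check, with placeholder binary path and profile name:

// Sketch only: treat a non-zero "is-active" exit as "kubelet is not running".
package main

import (
	"fmt"
	"os/exec"
)

func kubeletRunning(profile string) bool {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil // exit 0 only if the unit is active
}

func main() {
	if kubeletRunning("nok8s-demo") {
		fmt.Println("FAIL: kubelet is active in a --no-kubernetes profile")
	} else {
		fmt.Println("ok: kubelet is not running")
	}
}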

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.31s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-065794
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-065794: (1.306412279s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-065794 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-065794 --driver=docker  --container-runtime=containerd: (7.327033904s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.33s)

                                                
                                    
TestPause/serial/Pause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-873495 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-873495 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-873495 --output=json --layout=cluster: exit status 2 (394.768312ms)

                                                
                                                
-- stdout --
	{"Name":"pause-873495","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-873495","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
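
The --layout=cluster JSON above encodes states as HTTP-like codes (200 OK, 405 Stopped, 418 Paused), and the command itself exits 2 for a paused cluster. A minimal sketch of decoding just the fields shown in this report; the struct is not minikube's full schema and the profile name is a placeholder:

// Sketch only: decode the cluster-layout status JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	// Exit status 2 is expected for a paused cluster, so ignore the error
	// and decode whatever was written to stdout.
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"-p", "pause-demo", "--output=json", "--layout=cluster").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName) // e.g. 418 (Paused)
}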

                                                
                                    
TestPause/serial/Unpause (0.84s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-873495 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

                                                
                                    
TestPause/serial/PauseAgain (1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-873495 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-873495 --alsologtostderr -v=5: (1.000234743s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

                                                
                                    
TestPause/serial/DeletePaused (2.86s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-873495 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-873495 --alsologtostderr -v=5: (2.860004826s)
--- PASS: TestPause/serial/DeletePaused (2.86s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-065794 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-065794 "sudo systemctl is-active --quiet service kubelet": exit status 1 (385.068731ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.17s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-873495
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-873495: exit status 1 (28.792238ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-873495: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)
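
The cleanup check relies on `docker volume inspect` failing with "no such volume" once the profile has been deleted. A small sketch of that assertion, with a placeholder profile name:

// Sketch only: verify the profile's Docker volume is gone after `minikube delete`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "pause-demo"
	out, err := exec.Command("docker", "volume", "inspect", profile).CombinedOutput()
	if err != nil && strings.Contains(strings.ToLower(string(out)), "no such volume") {
		fmt.Println("ok: volume was cleaned up")
		return
	}
	fmt.Printf("volume still present or unexpected error: %v\n%s", err, out)
}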

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (98.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1592238984 start -p stopped-upgrade-509482 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0217 13:10:43.247063 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1592238984 start -p stopped-upgrade-509482 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (35.718284591s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1592238984 -p stopped-upgrade-509482 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1592238984 -p stopped-upgrade-509482 stop: (19.916023193s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-509482 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-509482 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.290121359s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (98.92s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-509482
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-509482: (1.432298536s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

                                                
                                    
TestNetworkPlugins/group/false (5.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-675133 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-675133 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (271.429937ms)

                                                
                                                
-- stdout --
	* [false-675133] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0217 13:13:39.219476 2277245 out.go:345] Setting OutFile to fd 1 ...
	I0217 13:13:39.219714 2277245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:13:39.219741 2277245 out.go:358] Setting ErrFile to fd 2...
	I0217 13:13:39.219760 2277245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0217 13:13:39.220024 2277245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20427-2080001/.minikube/bin
	I0217 13:13:39.220473 2277245 out.go:352] Setting JSON to false
	I0217 13:13:39.221460 2277245 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":309183,"bootTime":1739488837,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1077-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0217 13:13:39.221564 2277245 start.go:139] virtualization:  
	I0217 13:13:39.225728 2277245 out.go:177] * [false-675133] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0217 13:13:39.228930 2277245 out.go:177]   - MINIKUBE_LOCATION=20427
	I0217 13:13:39.228999 2277245 notify.go:220] Checking for updates...
	I0217 13:13:39.236052 2277245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0217 13:13:39.239117 2277245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20427-2080001/kubeconfig
	I0217 13:13:39.242101 2277245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20427-2080001/.minikube
	I0217 13:13:39.245006 2277245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0217 13:13:39.247834 2277245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0217 13:13:39.251384 2277245 config.go:182] Loaded profile config "force-systemd-flag-789189": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0217 13:13:39.251488 2277245 driver.go:394] Setting default libvirt URI to qemu:///system
	I0217 13:13:39.291364 2277245 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0217 13:13:39.291482 2277245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0217 13:13:39.404398 2277245 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-17 13:13:39.371198561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1077-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0217 13:13:39.404506 2277245 docker.go:318] overlay module found
	I0217 13:13:39.407655 2277245 out.go:177] * Using the docker driver based on user configuration
	I0217 13:13:39.410606 2277245 start.go:297] selected driver: docker
	I0217 13:13:39.410626 2277245 start.go:901] validating driver "docker" against <nil>
	I0217 13:13:39.410639 2277245 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0217 13:13:39.414131 2277245 out.go:201] 
	W0217 13:13:39.416941 2277245 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0217 13:13:39.419667 2277245 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-675133 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-675133" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-675133

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-675133"

                                                
                                                
----------------------- debugLogs end: false-675133 [took: 4.667117787s] --------------------------------
helpers_test.go:175: Cleaning up "false-675133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-675133
--- PASS: TestNetworkPlugins/group/false (5.14s)
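
The validation exercised here happens before any cluster is created: with the containerd runtime, --cni=false is rejected with MK_USAGE (exit 14), which is why the debug dump above only shows "profile not found" and "context does not exist" noise. A minimal sketch of that assertion, with placeholder binary path and profile name:

// Sketch only: assert that --cni=false is rejected for the containerd runtime.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-cni-demo",
		"--memory=2048", "--cni=false", "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	usageErr := errors.As(err, &exitErr) && exitErr.ExitCode() == 14
	cniMsg := strings.Contains(string(out), `The "containerd" container runtime requires CNI`)
	fmt.Printf("usage error: %v, CNI message present: %v\n", usageErr, cniMsg)
}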

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (153.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-684625 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0217 13:15:43.246906 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-684625 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m33.555768231s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-684625 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bff5b8f0-6b85-450b-804b-24e5e32c97ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bff5b8f0-6b85-450b-804b-24e5e32c97ba] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003898891s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-684625 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.64s)
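
The step applies testdata/busybox.yaml, waits up to 8 minutes for the pod labeled integration-test=busybox to become Ready, then reads `ulimit -n` inside it. A minimal sketch driven through kubectl rather than the suite's own pod poller; the context name is a placeholder:

// Sketch only: create the busybox workload, wait for readiness, then exec into it.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) []byte {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	ctx := "old-k8s-version-demo"
	kubectl("--context", ctx, "create", "-f", "testdata/busybox.yaml")
	// Block until the busybox pod reports Ready, mirroring the 8m wait above.
	kubectl("--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m0s")
	out := kubectl("--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	log.Printf("open-file limit inside the pod: %s", out)
}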

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-684625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-684625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.379789864s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-684625 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (68.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-695080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-695080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m8.507676572s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-684625 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-684625 --alsologtostderr -v=3: (13.542863997s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-684625 -n old-k8s-version-684625
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-684625 -n old-k8s-version-684625: exit status 7 (100.546255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-684625 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
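
Two details carry this step: `minikube status --format={{.Host}}` exits 7 when the host is stopped (treated as acceptable above), and, as the run shows, an addon can still be enabled against the stopped profile so it takes effect on the next start. A sketch of that handling, with placeholder binary path and profile name:

// Sketch only: tolerate exit status 7 from `status` on a stopped profile, then enable an addon.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-demo"
	bin := "out/minikube-linux-arm64"

	out, err := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).Output()
	var exitErr *exec.ExitError
	if err != nil && !(errors.As(err, &exitErr) && exitErr.ExitCode() == 7) {
		fmt.Println("unexpected status failure:", err)
		return
	}
	fmt.Printf("host state: %s (exit 7 is expected while stopped)\n", out)

	if msg, err := exec.Command(bin, "addons", "enable", "dashboard", "-p", profile).CombinedOutput(); err != nil {
		fmt.Printf("addons enable failed: %v\n%s", err, msg)
	}
}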

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-695080 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ce36c758-f697-4580-8483-8a425705b75b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ce36c758-f697-4580-8483-8a425705b75b] Running
E0217 13:19:11.055355 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004276696s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-695080 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-695080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-695080 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-695080 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-695080 --alsologtostderr -v=3: (12.069061625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-695080 -n no-preload-695080
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-695080 -n no-preload-695080: exit status 7 (129.089333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-695080 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (276.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-695080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0217 13:20:43.247629 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-695080 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m35.882360962s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-695080 -n no-preload-695080
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (276.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8ml5k" [591d430f-7a2f-49f9-a02b-98aa3d81069e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003209209s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8ml5k" [591d430f-7a2f-49f9-a02b-98aa3d81069e] Running
E0217 13:24:11.055496 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003627878s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-695080 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-695080 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-695080 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-695080 -n no-preload-695080
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-695080 -n no-preload-695080: exit status 2 (351.428147ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-695080 -n no-preload-695080
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-695080 -n no-preload-695080: exit status 2 (319.473105ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-695080 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-695080 -n no-preload-695080
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-695080 -n no-preload-695080
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)
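
For reference, the pause/unpause round trip exercised above reduces to a handful of CLI calls. A minimal sketch (assumption: same binary path and profile as logged; this is not the test's own code) that replays the sequence and prints each status:

// Hypothetical sketch replaying the pause/status/unpause sequence from the
// Pause test above. "status" exiting with status 2 while paused is expected,
// which is why errors are printed rather than treated as fatal.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s(err: %v)\n", args, out, err)
}

func main() {
	p := "no-preload-695080" // profile from the log above
	run("pause", "-p", p, "--alsologtostderr", "-v=1")
	run("status", "--format={{.APIServer}}", "-p", p, "-n", p)
	run("status", "--format={{.Kubelet}}", "-p", p, "-n", p)
	run("unpause", "-p", p, "--alsologtostderr", "-v=1")
	run("status", "--format={{.APIServer}}", "-p", p, "-n", p)
}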

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (71.25s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-652383 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-652383 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m11.252054441s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bpfhq" [a52b51c1-8d66-4d6e-83fb-a3a5305c115a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003607186s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bpfhq" [a52b51c1-8d66-4d6e-83fb-a3a5305c115a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003742081s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-684625 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-684625 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-684625 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-684625 --alsologtostderr -v=1: (1.134604874s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-684625 -n old-k8s-version-684625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-684625 -n old-k8s-version-684625: exit status 2 (426.83395ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-684625 -n old-k8s-version-684625
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-684625 -n old-k8s-version-684625: exit status 2 (409.913956ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-684625 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-684625 --alsologtostderr -v=1: (1.059365034s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-684625 -n old-k8s-version-684625
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-684625 -n old-k8s-version-684625
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-496152 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-496152 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (55.352430847s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-652383 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [699ceb36-a37b-48db-bb9a-66ee87984ec7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [699ceb36-a37b-48db-bb9a-66ee87984ec7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004092111s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-652383 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.38s)
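
The DeployApp step above applies testdata/busybox.yaml, waits for the pod to reach Running, then reads the open-file limit inside the container. A short, hypothetical Go sketch of that flow using the same kubectl context as in the log; the "kubectl wait" call is a simpler stand-in for the harness's own polling and is an assumption of this sketch:

// Hypothetical sketch of the DeployApp flow: create the busybox pod, wait for
// readiness, then run "ulimit -n" inside it, mirroring the kubectl calls above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(ctx string, args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", ctx}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "embed-certs-652383" // context name from the log above
	kubectl(ctx, "create", "-f", "testdata/busybox.yaml")
	kubectl(ctx, "wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m")
	fmt.Print(kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"))
}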

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-652383 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0217 13:25:43.247576 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-652383 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.00493761s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-652383 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.31s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-652383 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-652383 --alsologtostderr -v=3: (12.312845502s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-496152 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1c8efbc0-489a-4a59-9256-9effdedce220] Pending
helpers_test.go:344: "busybox" [1c8efbc0-489a-4a59-9256-9effdedce220] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1c8efbc0-489a-4a59-9256-9effdedce220] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003781092s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-496152 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-496152 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-496152 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-496152 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-496152 --alsologtostderr -v=3: (12.343779844s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-652383 -n embed-certs-652383
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-652383 -n embed-certs-652383: exit status 7 (118.873201ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-652383 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)
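
The check above confirms the host reports Stopped (status exits non-zero, which is tolerated) and that the dashboard addon can still be enabled on the stopped profile. A hypothetical Go sketch of the same two calls, using the binary path, profile, and flags exactly as they appear in the log:

// Hypothetical sketch of the EnableAddonAfterStop check: read the host status
// (an error/exit status 7 is expected while stopped), then enable the dashboard addon.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	p := "embed-certs-652383" // profile from the log above
	bin := "out/minikube-linux-arm64"

	out, err := exec.Command(bin, "status", "--format={{.Host}}", "-p", p, "-n", p).CombinedOutput()
	fmt.Printf("host status: %s(exit err: %v)\n", out, err) // "Stopped" with a non-zero exit is expected here

	out, err = exec.Command(bin, "addons", "enable", "dashboard", "-p", p,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4").CombinedOutput()
	fmt.Printf("%s(err: %v)\n", out, err)
}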

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (290.57s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-652383 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-652383 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m50.219341667s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-652383 -n embed-certs-652383
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (290.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-496152 -n default-k8s-diff-port-496152
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-496152 -n default-k8s-diff-port-496152: exit status 7 (143.188104ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-496152 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-496152 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0217 13:27:42.842168 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:42.849174 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:42.860640 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:42.881987 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:42.923467 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:43.005036 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:43.166699 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:43.488651 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:44.130473 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:45.412525 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:47.973932 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:27:53.095855 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:28:03.337514 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:28:23.819598 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:28:54.126310 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:03.170340 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:03.176925 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:03.188384 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:03.209800 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:03.251399 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:03.332803 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:03.494249 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:03.816343 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:04.458237 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:04.781820 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:05.740021 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:08.301808 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:11.055607 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:13.423188 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:23.665334 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:29:44.147434 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:30:25.109096 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:30:26.703261 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-496152 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m30.598271245s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-496152 -n default-k8s-diff-port-496152
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8vgxx" [a0461d7c-9473-4ce6-a4a5-98d0a1a1ff23] Running
E0217 13:30:43.247504 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002975148s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8vgxx" [a0461d7c-9473-4ce6-a4a5-98d0a1a1ff23] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004111633s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-496152 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8smxc" [7d87ab9c-9a0f-42c1-90e6-1d3474aab750] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003721105s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-496152 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-496152 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-496152 -n default-k8s-diff-port-496152
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-496152 -n default-k8s-diff-port-496152: exit status 2 (335.241632ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-496152 -n default-k8s-diff-port-496152
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-496152 -n default-k8s-diff-port-496152: exit status 2 (330.075599ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-496152 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-496152 -n default-k8s-diff-port-496152
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-496152 -n default-k8s-diff-port-496152
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8smxc" [7d87ab9c-9a0f-42c1-90e6-1d3474aab750] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004392761s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-652383 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.64s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-819150 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-819150 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (43.640013658s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.64s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-652383 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-652383 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-652383 -n embed-certs-652383
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-652383 -n embed-certs-652383: exit status 2 (392.775817ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-652383 -n embed-certs-652383
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-652383 -n embed-certs-652383: exit status 2 (366.448412ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-652383 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-652383 --alsologtostderr -v=1: (1.12839458s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-652383 -n embed-certs-652383
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-652383 -n embed-certs-652383
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (73.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m13.969025252s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.9s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-819150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-819150 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.899822753s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-819150 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-819150 --alsologtostderr -v=3: (1.374870135s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-819150 -n newest-cni-819150
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-819150 -n newest-cni-819150: exit status 7 (138.219755ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-819150 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.89s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-819150 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0217 13:31:47.031064 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-819150 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (17.318463595s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-819150 -n newest-cni-819150
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.89s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-819150 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.05s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-819150 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-819150 -n newest-cni-819150
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-819150 -n newest-cni-819150: exit status 2 (372.829181ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-819150 -n newest-cni-819150
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-819150 -n newest-cni-819150: exit status 2 (369.387501ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-819150 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-819150 -n newest-cni-819150
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-819150 -n newest-cni-819150
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.05s)
E0217 13:37:05.879700 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:37:20.502526 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:37:20.509185 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:37:20.520594 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:37:20.542008 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:37:20.583613 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:37:20.665169 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:37:20.827203 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (67.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m7.826364815s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.83s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-675133 "pgrep -a kubelet"
I0217 13:32:20.179487 2085373 config.go:182] Loaded profile config "auto-675133": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-675133 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-h9k6p" [7881460b-89e1-48d5-824f-5a300f52e27a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-h9k6p" [7881460b-89e1-48d5-824f-5a300f52e27a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004937456s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-675133 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
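
The three checks above (DNS, Localhost, HairPin) each run one short command inside the netcat deployment. A hypothetical Go sketch bundling the same probes against the "auto" profile's context; wrapping every command in "/bin/sh -c" is a simplification of this sketch, not necessarily how the harness invokes them:

// Hypothetical sketch of the DNS, Localhost, and HairPin probes run above.
package main

import (
	"fmt"
	"os/exec"
)

func probe(ctx, name, cmd string) {
	out, err := exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", cmd).CombinedOutput()
	fmt.Printf("%s: err=%v\n%s", name, err, out)
}

func main() {
	ctx := "auto-675133" // context from the log above
	probe(ctx, "DNS", "nslookup kubernetes.default")
	probe(ctx, "Localhost", "nc -w 5 -i 5 -z localhost 8080")
	probe(ctx, "HairPin", "nc -w 5 -i 5 -z netcat 8080")
}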

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.59s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0217 13:33:10.544903 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/old-k8s-version-684625/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.594437388s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.59s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f8s58" [a7684640-b80e-4880-8e11-f045af57ecd4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003580778s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-675133 "pgrep -a kubelet"
I0217 13:33:20.525320 2085373 config.go:182] Loaded profile config "kindnet-675133": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-675133 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mwbcp" [2bea281c-9390-4df9-bb50-4533bf4ce1ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-mwbcp" [2bea281c-9390-4df9-bb50-4533bf4ce1ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003897237s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-675133 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)
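For reference, the three connectivity checks that close out each plugin group in this section (DNS, Localhost, HairPin) can be repeated by hand against any profile that is still running. A minimal sketch, assuming the kindnet-675133 context and the netcat deployment from testdata/netcat-deployment.yaml are still present (substitute any other profile name from this run):

    # in-cluster DNS resolution from the netcat pod
    kubectl --context kindnet-675133 exec deployment/netcat -- nslookup kubernetes.default
    # the pod can reach its own localhost on the test port
    kubectl --context kindnet-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin-style check: the pod reaches itself through the netcat service name
    kubectl --context kindnet-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

A zero exit status from each command corresponds to the PASS results recorded here.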

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (53.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0217 13:34:03.170262 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/no-preload-695080/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (53.942706756s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.94s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zcxj6" [c8eea8bc-c784-4be9-8cb2-b026979f6f7c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004641549s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
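The ControllerPod steps only wait for the CNI's node agent pods to report Running; the label selectors and namespaces they poll are the ones shown in each wait line. A sketch of the equivalent manual check, using profile names from this run:

    kubectl --context kindnet-675133 -n kube-system get pods -l app=kindnet
    kubectl --context calico-675133 -n kube-system get pods -l k8s-app=calico-node
    # the flannel profile started later in this run keeps its agent in the kube-flannel namespace
    kubectl --context flannel-675133 -n kube-flannel get pods -l app=flannel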

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-675133 "pgrep -a kubelet"
I0217 13:34:10.458924 2085373 config.go:182] Loaded profile config "calico-675133": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-675133 replace --force -f testdata/netcat-deployment.yaml
I0217 13:34:10.785061 2085373 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jrfq9" [c9a3b786-ac04-4cbf-a1c5-54856cf22e90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0217 13:34:11.055199 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/addons-767669/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-jrfq9" [c9a3b786-ac04-4cbf-a1c5-54856cf22e90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004440731s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-675133 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (76.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m16.158709597s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-675133 "pgrep -a kubelet"
I0217 13:34:50.890182 2085373 config.go:182] Loaded profile config "custom-flannel-675133": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-675133 replace --force -f testdata/netcat-deployment.yaml
I0217 13:34:51.427040 2085373 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vxqc2" [051e4db7-0d05-4cf1-a56c-4ecfb6024883] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vxqc2" [051e4db7-0d05-4cf1-a56c-4ecfb6024883] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004334384s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.54s)
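Each NetCatPod step replaces the netcat test deployment and then polls until a pod labelled app=netcat is Running, which is the Pending -> Running transition logged above. Roughly the same check can be done with plain kubectl; this sketch uses kubectl wait in place of the test's own polling helper, with custom-flannel-675133 standing in for any profile:

    kubectl --context custom-flannel-675133 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context custom-flannel-675133 -n default wait --for=condition=Ready pod -l app=netcat --timeout=15m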

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-675133 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0217 13:35:43.246789 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/functional-082454/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:43.939531 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:43.945920 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:43.957318 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:43.978709 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:44.020064 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:44.101711 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:44.263690 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:44.585304 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:45.227109 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:46.509409 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:49.071050 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:35:54.192952 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (51.978648882s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.98s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-675133 "pgrep -a kubelet"
I0217 13:36:02.824327 2085373 config.go:182] Loaded profile config "enable-default-cni-675133": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)
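The KubeletFlags steps only confirm the kubelet command line via pgrep; the CNI configuration each plugin drops on the node is collected separately by the debugLogs helper (see the /etc/cni entries in the kubenet section further down). Both can also be pulled manually over minikube ssh. A sketch, noting that /etc/cni/net.d as the CNI config directory is a conventional assumption rather than something this log verifies:

    out/minikube-linux-arm64 ssh -p enable-default-cni-675133 "pgrep -a kubelet"
    out/minikube-linux-arm64 ssh -p enable-default-cni-675133 "ls /etc/cni/net.d && sudo cat /etc/cni/net.d/*"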

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-675133 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hglxf" [5ac9af60-8c1f-48a4-a762-97c7daaaf8b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0217 13:36:04.435370 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-hglxf" [5ac9af60-8c1f-48a4-a762-97c7daaaf8b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003442995s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.47s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-675133 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mdnhw" [20acb37a-e889-48aa-bb87-4b722715806d] Running
E0217 13:36:24.917143 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/default-k8s-diff-port-496152/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00498596s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-675133 "pgrep -a kubelet"
I0217 13:36:27.107209 2085373 config.go:182] Loaded profile config "flannel-675133": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-675133 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-j7jg8" [ae41ef9f-a25a-4ea2-8919-9f883a9c576b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-j7jg8" [ae41ef9f-a25a-4ea2-8919-9f883a9c576b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003128259s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (46.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-675133 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (46.024780271s)
--- PASS: TestNetworkPlugins/group/bridge/Start (46.02s)
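The five CNI Start steps that appear in this section share one command shape and differ only in the flag that selects the network plugin. Consolidating the Run: lines above (--alsologtostderr omitted for brevity):

    out/minikube-linux-arm64 start -p calico-675133 --memory=3072 --wait=true --wait-timeout=15m --cni=calico --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p custom-flannel-675133 --memory=3072 --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p enable-default-cni-675133 --memory=3072 --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p flannel-675133 --memory=3072 --wait=true --wait-timeout=15m --cni=flannel --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p bridge-675133 --memory=3072 --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=containerd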

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-675133 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-675133 "pgrep -a kubelet"
E0217 13:37:21.149397 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
I0217 13:37:21.279407 2085373 config.go:182] Loaded profile config "bridge-675133": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-675133 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-d2vhj" [bc8cbe12-a4f1-42bc-b002-9f4ee17fb73e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0217 13:37:21.790873 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
E0217 13:37:23.073079 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-d2vhj" [bc8cbe12-a4f1-42bc-b002-9f4ee17fb73e] Running
E0217 13:37:25.634562 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003583256s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-675133 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0217 13:37:30.756759 2085373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20427-2080001/.minikube/profiles/auto-675133/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-675133 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (30/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.59s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-026566 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-026566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-026566
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-282278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-282278
--- SKIP: TestStartStop/group/disable-driver-mounts (0.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-675133 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-675133" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-675133

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-675133"

                                                
                                                
----------------------- debugLogs end: kubenet-675133 [took: 4.618694334s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-675133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-675133
--- SKIP: TestNetworkPlugins/group/kubenet (4.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-675133 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-675133" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-675133

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-675133" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-675133"

                                                
                                                
----------------------- debugLogs end: cilium-675133 [took: 5.753680484s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-675133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-675133
--- SKIP: TestNetworkPlugins/group/cilium (6.05s)

                                                
                                    