Test Report: Docker_Linux_containerd_arm64 20109

                    
a80036b9799ef97ff87d49db0998430356d1f02a:2025-01-20:37996

Tests failed: 1 of 330

Order  Failed test                                              Duration (s)
304    TestStartStop/group/old-k8s-version/serial/SecondStart   385.51
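
The failed step can be re-run in isolation from a minikube source checkout; a minimal sketch, assuming the standard repository layout with the integration suite under ./test/integration, a pre-built out/minikube-linux-arm64 binary, and default test wiring (the CI wrapper flags used for this job are not shown in this report):

	# run only the old-k8s-version SecondStart subtest; go test matches each
	# slash-separated segment of -run against the nested subtest names
	go test -v -timeout 60m ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/SecondStart'

SecondStart belongs to a serial group, so earlier steps in the same group may need to run first for the old-k8s-version-145659 profile to exist.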
TestStartStop/group/old-k8s-version/serial/SecondStart (385.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-145659 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0120 17:46:43.279884    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:47:01.700159    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-145659 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m20.326132357s)

-- stdout --
	* [old-k8s-version-145659] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-145659" primary control-plane node in "old-k8s-version-145659" cluster
	* Pulling base image v0.0.46 ...
	* Restarting existing docker container for "old-k8s-version-145659" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-145659 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I0120 17:46:05.924027  216535 out.go:345] Setting OutFile to fd 1 ...
	I0120 17:46:05.924133  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:46:05.924138  216535 out.go:358] Setting ErrFile to fd 2...
	I0120 17:46:05.924143  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:46:05.927074  216535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 17:46:05.927700  216535 out.go:352] Setting JSON to false
	I0120 17:46:05.929358  216535 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5310,"bootTime":1737389856,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 17:46:05.929494  216535 start.go:139] virtualization:  
	I0120 17:46:05.934415  216535 out.go:177] * [old-k8s-version-145659] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 17:46:05.937769  216535 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 17:46:05.937939  216535 notify.go:220] Checking for updates...
	I0120 17:46:05.943752  216535 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 17:46:05.946745  216535 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 17:46:05.949651  216535 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	I0120 17:46:05.953188  216535 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 17:46:05.956057  216535 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 17:46:05.959519  216535 config.go:182] Loaded profile config "old-k8s-version-145659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 17:46:05.962887  216535 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 17:46:05.965681  216535 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 17:46:06.002743  216535 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 17:46:06.002881  216535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 17:46:06.093668  216535 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 17:46:06.084132487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 17:46:06.093786  216535 docker.go:318] overlay module found
	I0120 17:46:06.096967  216535 out.go:177] * Using the docker driver based on existing profile
	I0120 17:46:06.099823  216535 start.go:297] selected driver: docker
	I0120 17:46:06.099846  216535 start.go:901] validating driver "docker" against &{Name:old-k8s-version-145659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 17:46:06.099966  216535 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 17:46:06.100696  216535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 17:46:06.206608  216535 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 17:46:06.19724018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 17:46:06.207017  216535 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 17:46:06.207037  216535 cni.go:84] Creating CNI manager for ""
	I0120 17:46:06.207072  216535 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 17:46:06.207105  216535 start.go:340] cluster config:
	{Name:old-k8s-version-145659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:contai
nerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 17:46:06.210546  216535 out.go:177] * Starting "old-k8s-version-145659" primary control-plane node in "old-k8s-version-145659" cluster
	I0120 17:46:06.213362  216535 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 17:46:06.216265  216535 out.go:177] * Pulling base image v0.0.46 ...
	I0120 17:46:06.219148  216535 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 17:46:06.219203  216535 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0120 17:46:06.219212  216535 cache.go:56] Caching tarball of preloaded images
	I0120 17:46:06.219305  216535 preload.go:172] Found /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0120 17:46:06.219313  216535 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0120 17:46:06.219460  216535 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/config.json ...
	I0120 17:46:06.219694  216535 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 17:46:06.245850  216535 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0120 17:46:06.245871  216535 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0120 17:46:06.245884  216535 cache.go:227] Successfully downloaded all kic artifacts
	I0120 17:46:06.245915  216535 start.go:360] acquireMachinesLock for old-k8s-version-145659: {Name:mkc018e598a91196e1dc19a35c434f89ff9fd55d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 17:46:06.245970  216535 start.go:364] duration metric: took 35.25µs to acquireMachinesLock for "old-k8s-version-145659"
	I0120 17:46:06.245988  216535 start.go:96] Skipping create...Using existing machine configuration
	I0120 17:46:06.245993  216535 fix.go:54] fixHost starting: 
	I0120 17:46:06.246246  216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
	I0120 17:46:06.275678  216535 fix.go:112] recreateIfNeeded on old-k8s-version-145659: state=Stopped err=<nil>
	W0120 17:46:06.275704  216535 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 17:46:06.279302  216535 out.go:177] * Restarting existing docker container for "old-k8s-version-145659" ...
	I0120 17:46:06.283509  216535 cli_runner.go:164] Run: docker start old-k8s-version-145659
	I0120 17:46:06.662660  216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
	I0120 17:46:06.687634  216535 kic.go:430] container "old-k8s-version-145659" state is running.
	I0120 17:46:06.688026  216535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145659
	I0120 17:46:06.718777  216535 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/config.json ...
	I0120 17:46:06.718997  216535 machine.go:93] provisionDockerMachine start ...
	I0120 17:46:06.719060  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:06.754965  216535 main.go:141] libmachine: Using SSH client type: native
	I0120 17:46:06.755236  216535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0120 17:46:06.755252  216535 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 17:46:06.755979  216535 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0120 17:46:09.879374  216535 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-145659
	
	I0120 17:46:09.879396  216535 ubuntu.go:169] provisioning hostname "old-k8s-version-145659"
	I0120 17:46:09.879468  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:09.923478  216535 main.go:141] libmachine: Using SSH client type: native
	I0120 17:46:09.923728  216535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0120 17:46:09.923739  216535 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-145659 && echo "old-k8s-version-145659" | sudo tee /etc/hostname
	I0120 17:46:10.068627  216535 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-145659
	
	I0120 17:46:10.068717  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:10.092361  216535 main.go:141] libmachine: Using SSH client type: native
	I0120 17:46:10.092623  216535 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0120 17:46:10.092647  216535 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-145659' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-145659/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-145659' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 17:46:10.220671  216535 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 17:46:10.220708  216535 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2518/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2518/.minikube}
	I0120 17:46:10.220739  216535 ubuntu.go:177] setting up certificates
	I0120 17:46:10.220749  216535 provision.go:84] configureAuth start
	I0120 17:46:10.220837  216535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145659
	I0120 17:46:10.247201  216535 provision.go:143] copyHostCerts
	I0120 17:46:10.247292  216535 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem, removing ...
	I0120 17:46:10.247306  216535 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem
	I0120 17:46:10.247404  216535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem (1679 bytes)
	I0120 17:46:10.247548  216535 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem, removing ...
	I0120 17:46:10.247559  216535 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem
	I0120 17:46:10.247591  216535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem (1082 bytes)
	I0120 17:46:10.247685  216535 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem, removing ...
	I0120 17:46:10.247696  216535 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem
	I0120 17:46:10.247731  216535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem (1123 bytes)
	I0120 17:46:10.247806  216535 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-145659 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-145659]
	I0120 17:46:11.195154  216535 provision.go:177] copyRemoteCerts
	I0120 17:46:11.195332  216535 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 17:46:11.205365  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:11.226258  216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
	I0120 17:46:11.317569  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 17:46:11.343192  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 17:46:11.369292  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 17:46:11.395976  216535 provision.go:87] duration metric: took 1.175208097s to configureAuth
	I0120 17:46:11.396054  216535 ubuntu.go:193] setting minikube options for container-runtime
	I0120 17:46:11.396300  216535 config.go:182] Loaded profile config "old-k8s-version-145659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 17:46:11.396329  216535 machine.go:96] duration metric: took 4.67731771s to provisionDockerMachine
	I0120 17:46:11.396364  216535 start.go:293] postStartSetup for "old-k8s-version-145659" (driver="docker")
	I0120 17:46:11.396393  216535 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 17:46:11.396476  216535 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 17:46:11.396543  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:11.418332  216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
	I0120 17:46:11.509234  216535 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 17:46:11.513156  216535 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0120 17:46:11.513190  216535 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0120 17:46:11.513202  216535 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0120 17:46:11.513208  216535 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0120 17:46:11.513218  216535 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2518/.minikube/addons for local assets ...
	I0120 17:46:11.513277  216535 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2518/.minikube/files for local assets ...
	I0120 17:46:11.513353  216535 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem -> 78442.pem in /etc/ssl/certs
	I0120 17:46:11.513451  216535 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 17:46:11.522719  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem --> /etc/ssl/certs/78442.pem (1708 bytes)
	I0120 17:46:11.549248  216535 start.go:296] duration metric: took 152.851001ms for postStartSetup
	I0120 17:46:11.549396  216535 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 17:46:11.549456  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:11.569060  216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
	I0120 17:46:11.656223  216535 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0120 17:46:11.666597  216535 fix.go:56] duration metric: took 5.42059619s for fixHost
	I0120 17:46:11.666633  216535 start.go:83] releasing machines lock for "old-k8s-version-145659", held for 5.420653897s
	I0120 17:46:11.666702  216535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-145659
	I0120 17:46:11.692738  216535 ssh_runner.go:195] Run: cat /version.json
	I0120 17:46:11.692789  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:11.692844  216535 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 17:46:11.692925  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:11.723463  216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
	I0120 17:46:11.731314  216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
	I0120 17:46:11.831454  216535 ssh_runner.go:195] Run: systemctl --version
	I0120 17:46:11.976016  216535 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0120 17:46:11.980625  216535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0120 17:46:12.004291  216535 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0120 17:46:12.004452  216535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 17:46:12.015676  216535 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 17:46:12.015741  216535 start.go:495] detecting cgroup driver to use...
	I0120 17:46:12.015788  216535 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0120 17:46:12.015865  216535 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 17:46:12.032837  216535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 17:46:12.047395  216535 docker.go:217] disabling cri-docker service (if available) ...
	I0120 17:46:12.047542  216535 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 17:46:12.063598  216535 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 17:46:12.077573  216535 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 17:46:12.189191  216535 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 17:46:12.299654  216535 docker.go:233] disabling docker service ...
	I0120 17:46:12.299769  216535 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 17:46:12.314439  216535 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 17:46:12.326906  216535 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 17:46:12.434394  216535 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 17:46:12.553091  216535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 17:46:12.568529  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 17:46:12.586525  216535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0120 17:46:12.596704  216535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 17:46:12.606713  216535 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 17:46:12.606832  216535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 17:46:12.616823  216535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 17:46:12.626682  216535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 17:46:12.636497  216535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 17:46:12.646523  216535 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 17:46:12.656022  216535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 17:46:12.666080  216535 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 17:46:12.675830  216535 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 17:46:12.684732  216535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 17:46:12.791875  216535 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 17:46:12.998071  216535 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 17:46:12.998189  216535 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 17:46:13.002732  216535 start.go:563] Will wait 60s for crictl version
	I0120 17:46:13.002876  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:46:13.007296  216535 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 17:46:13.066060  216535 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0120 17:46:13.066173  216535 ssh_runner.go:195] Run: containerd --version
	I0120 17:46:13.087981  216535 ssh_runner.go:195] Run: containerd --version
	I0120 17:46:13.115409  216535 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	I0120 17:46:13.118792  216535 cli_runner.go:164] Run: docker network inspect old-k8s-version-145659 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0120 17:46:13.141085  216535 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0120 17:46:13.147849  216535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 17:46:13.160422  216535 kubeadm.go:883] updating cluster {Name:old-k8s-version-145659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 17:46:13.160554  216535 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 17:46:13.160612  216535 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 17:46:13.214208  216535 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 17:46:13.214232  216535 containerd.go:534] Images already preloaded, skipping extraction
	I0120 17:46:13.214289  216535 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 17:46:13.261217  216535 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 17:46:13.261241  216535 cache_images.go:84] Images are preloaded, skipping loading
	I0120 17:46:13.261249  216535 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0120 17:46:13.261359  216535 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-145659 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 17:46:13.261430  216535 ssh_runner.go:195] Run: sudo crictl info
	I0120 17:46:13.313120  216535 cni.go:84] Creating CNI manager for ""
	I0120 17:46:13.313148  216535 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 17:46:13.313159  216535 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 17:46:13.313179  216535 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-145659 NodeName:old-k8s-version-145659 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 17:46:13.313307  216535 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-145659"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 17:46:13.313377  216535 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 17:46:13.323129  216535 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 17:46:13.323196  216535 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 17:46:13.332349  216535 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0120 17:46:13.351092  216535 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 17:46:13.370101  216535 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0120 17:46:13.389638  216535 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0120 17:46:13.393251  216535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 17:46:13.405001  216535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 17:46:13.518817  216535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 17:46:13.535456  216535 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659 for IP: 192.168.76.2
	I0120 17:46:13.535481  216535 certs.go:194] generating shared ca certs ...
	I0120 17:46:13.535499  216535 certs.go:226] acquiring lock for ca certs: {Name:mk409d9cbe30328f0e66b0d712629bd4b02b995b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 17:46:13.535636  216535 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2518/.minikube/ca.key
	I0120 17:46:13.535683  216535 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.key
	I0120 17:46:13.535696  216535 certs.go:256] generating profile certs ...
	I0120 17:46:13.535789  216535 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.key
	I0120 17:46:13.535859  216535 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/apiserver.key.4fd2295c
	I0120 17:46:13.535906  216535 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/proxy-client.key
	I0120 17:46:13.536030  216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844.pem (1338 bytes)
	W0120 17:46:13.536064  216535 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844_empty.pem, impossibly tiny 0 bytes
	I0120 17:46:13.536077  216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 17:46:13.536101  216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem (1082 bytes)
	I0120 17:46:13.536127  216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem (1123 bytes)
	I0120 17:46:13.536153  216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem (1679 bytes)
	I0120 17:46:13.536197  216535 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem (1708 bytes)
	I0120 17:46:13.536808  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 17:46:13.564864  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 17:46:13.594949  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 17:46:13.621901  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 17:46:13.649436  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 17:46:13.676838  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 17:46:13.702488  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 17:46:13.777588  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 17:46:13.835573  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 17:46:13.861877  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844.pem --> /usr/share/ca-certificates/7844.pem (1338 bytes)
	I0120 17:46:13.889994  216535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem --> /usr/share/ca-certificates/78442.pem (1708 bytes)
	I0120 17:46:13.925019  216535 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 17:46:13.952492  216535 ssh_runner.go:195] Run: openssl version
	I0120 17:46:13.958045  216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 17:46:13.982563  216535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 17:46:13.990668  216535 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0120 17:46:13.990735  216535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 17:46:13.998119  216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 17:46:14.007920  216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7844.pem && ln -fs /usr/share/ca-certificates/7844.pem /etc/ssl/certs/7844.pem"
	I0120 17:46:14.017993  216535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7844.pem
	I0120 17:46:14.021861  216535 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:06 /usr/share/ca-certificates/7844.pem
	I0120 17:46:14.021926  216535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7844.pem
	I0120 17:46:14.028914  216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7844.pem /etc/ssl/certs/51391683.0"
	I0120 17:46:14.038066  216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78442.pem && ln -fs /usr/share/ca-certificates/78442.pem /etc/ssl/certs/78442.pem"
	I0120 17:46:14.048940  216535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78442.pem
	I0120 17:46:14.052695  216535 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 17:06 /usr/share/ca-certificates/78442.pem
	I0120 17:46:14.052764  216535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78442.pem
	I0120 17:46:14.059923  216535 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78442.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 17:46:14.069343  216535 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 17:46:14.072975  216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 17:46:14.083721  216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 17:46:14.091131  216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 17:46:14.098125  216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 17:46:14.105448  216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 17:46:14.112727  216535 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 17:46:14.119900  216535 kubeadm.go:392] StartCluster: {Name:old-k8s-version-145659 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-145659 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 17:46:14.119998  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 17:46:14.120061  216535 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 17:46:14.177551  216535 cri.go:89] found id: "b5b9683544505314d199b518eecbc67e62715b40df7019ff4891e9a38610f476"
	I0120 17:46:14.177584  216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
	I0120 17:46:14.177598  216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
	I0120 17:46:14.177602  216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
	I0120 17:46:14.177606  216535 cri.go:89] found id: "bdf7abdba408a785c8e38f1cfe1b17928b77ea83bb630a565d01e897434779c3"
	I0120 17:46:14.177610  216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
	I0120 17:46:14.177616  216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
	I0120 17:46:14.177626  216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
	I0120 17:46:14.177633  216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
	I0120 17:46:14.177639  216535 cri.go:89] found id: ""
	I0120 17:46:14.177690  216535 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0120 17:46:14.190780  216535 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T17:46:14Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0120 17:46:14.190859  216535 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 17:46:14.201596  216535 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 17:46:14.201616  216535 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 17:46:14.201668  216535 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 17:46:14.212505  216535 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 17:46:14.212997  216535 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-145659" does not appear in /home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 17:46:14.213140  216535 kubeconfig.go:62] /home/jenkins/minikube-integration/20109-2518/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-145659" cluster setting kubeconfig missing "old-k8s-version-145659" context setting]
	I0120 17:46:14.213443  216535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2518/kubeconfig: {Name:mk7eb37afa68734d2ba48fcac1147e4fe5c87411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 17:46:14.214723  216535 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 17:46:14.225046  216535 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0120 17:46:14.225097  216535 kubeadm.go:597] duration metric: took 23.47144ms to restartPrimaryControlPlane
	I0120 17:46:14.225110  216535 kubeadm.go:394] duration metric: took 105.219257ms to StartCluster
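The quick restart path above decides whether the control plane needs reconfiguring by diffing the kubeadm config already on disk against the freshly rendered one; the "does not require reconfiguration" line is consistent with that diff reporting no changes. A minimal sketch of the decision (the diff command is taken from the log; the branch messages are placeholders):

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "running cluster does not require reconfiguration"
	else
	  echo "kubeadm config drift detected; a reconfiguration pass would be needed"
	fi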
	I0120 17:46:14.225135  216535 settings.go:142] acquiring lock: {Name:mk1c7d255bd6ff729fb7f0cda8440d084eb0c286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 17:46:14.225216  216535 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 17:46:14.225948  216535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2518/kubeconfig: {Name:mk7eb37afa68734d2ba48fcac1147e4fe5c87411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 17:46:14.226201  216535 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 17:46:14.226540  216535 config.go:182] Loaded profile config "old-k8s-version-145659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 17:46:14.226587  216535 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 17:46:14.226677  216535 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-145659"
	I0120 17:46:14.226697  216535 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-145659"
	W0120 17:46:14.226709  216535 addons.go:247] addon storage-provisioner should already be in state true
	I0120 17:46:14.226740  216535 host.go:66] Checking if "old-k8s-version-145659" exists ...
	I0120 17:46:14.227880  216535 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-145659"
	I0120 17:46:14.227904  216535 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-145659"
	I0120 17:46:14.227944  216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
	I0120 17:46:14.228200  216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
	I0120 17:46:14.228540  216535 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-145659"
	I0120 17:46:14.228573  216535 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-145659"
	W0120 17:46:14.228609  216535 addons.go:247] addon metrics-server should already be in state true
	I0120 17:46:14.228647  216535 host.go:66] Checking if "old-k8s-version-145659" exists ...
	I0120 17:46:14.229117  216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
	I0120 17:46:14.230732  216535 addons.go:69] Setting dashboard=true in profile "old-k8s-version-145659"
	I0120 17:46:14.230766  216535 addons.go:238] Setting addon dashboard=true in "old-k8s-version-145659"
	W0120 17:46:14.230773  216535 addons.go:247] addon dashboard should already be in state true
	I0120 17:46:14.230801  216535 host.go:66] Checking if "old-k8s-version-145659" exists ...
	I0120 17:46:14.231460  216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
	I0120 17:46:14.231853  216535 out.go:177] * Verifying Kubernetes components...
	I0120 17:46:14.234732  216535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 17:46:14.291391  216535 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 17:46:14.293958  216535 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 17:46:14.294000  216535 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 17:46:14.294069  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:14.319127  216535 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-145659"
	W0120 17:46:14.319152  216535 addons.go:247] addon default-storageclass should already be in state true
	I0120 17:46:14.319177  216535 host.go:66] Checking if "old-k8s-version-145659" exists ...
	I0120 17:46:14.323918  216535 cli_runner.go:164] Run: docker container inspect old-k8s-version-145659 --format={{.State.Status}}
	I0120 17:46:14.327122  216535 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 17:46:14.327289  216535 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 17:46:14.329920  216535 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 17:46:14.330281  216535 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 17:46:14.330307  216535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 17:46:14.330378  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:14.333464  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 17:46:14.333487  216535 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 17:46:14.333554  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:14.383669  216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
	I0120 17:46:14.397086  216535 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 17:46:14.397107  216535 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 17:46:14.397167  216535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-145659
	I0120 17:46:14.403615  216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
	I0120 17:46:14.405758  216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
	I0120 17:46:14.433152  216535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa Username:docker}
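All four ssh clients above reuse the same forwarded port (33063 here), which was resolved a few lines earlier with the docker container inspect template. A sketch of doing the same by hand, assuming the minikube-generated identity file and the docker user shown in the log:

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-145659)
	ssh -o StrictHostKeyChecking=no -p "$PORT" \
	  -i /home/jenkins/minikube-integration/20109-2518/.minikube/machines/old-k8s-version-145659/id_rsa \
	  docker@127.0.0.1 uname -a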
	I0120 17:46:14.462735  216535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 17:46:14.502474  216535 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-145659" to be "Ready" ...
	I0120 17:46:14.592513  216535 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 17:46:14.592536  216535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 17:46:14.629513  216535 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 17:46:14.629599  216535 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 17:46:14.673229  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 17:46:14.698415  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 17:46:14.698499  216535 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 17:46:14.702688  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 17:46:14.723082  216535 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 17:46:14.723162  216535 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 17:46:14.777166  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 17:46:14.777264  216535 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 17:46:14.832906  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 17:46:14.928699  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 17:46:14.928790  216535 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0120 17:46:15.076953  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:15.077058  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.077113  216535 retry.go:31] will retry after 304.037714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.077156  216535 retry.go:31] will retry after 144.986778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
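Every apply failure in this stretch has the same cause: kubectl inside the node targets the apiserver on localhost:8443, which is still coming back up, so each attempt is refused and retry.go backs off and tries again. A sketch of waiting for the apiserver before applying, assuming the standard /readyz endpoint (the kubectl and manifest paths are from the log; the polling loop is not):

	# poll the apiserver until it reports ready, then apply the addon manifest
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.20.0/kubectl get --raw /readyz >/dev/null 2>&1; do
	  sleep 1
	done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml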
	I0120 17:46:15.079058  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 17:46:15.079132  216535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 17:46:15.126780  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 17:46:15.126863  216535 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0120 17:46:15.173200  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.173296  216535 retry.go:31] will retry after 212.361676ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.186114  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 17:46:15.186193  216535 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 17:46:15.209757  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 17:46:15.209832  216535 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 17:46:15.223066  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0120 17:46:15.239112  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 17:46:15.239189  216535 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 17:46:15.283855  216535 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 17:46:15.283929  216535 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 17:46:15.316443  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 17:46:15.381631  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 17:46:15.385921  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 17:46:15.475534  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.475565  216535 retry.go:31] will retry after 264.217766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:15.485096  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.485127  216535 retry.go:31] will retry after 292.160269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:15.582210  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.582243  216535 retry.go:31] will retry after 356.191953ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:15.638795  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.638828  216535 retry.go:31] will retry after 369.440037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.740173  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0120 17:46:15.777647  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 17:46:15.853879  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.853958  216535 retry.go:31] will retry after 804.813849ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:15.914178  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.914255  216535 retry.go:31] will retry after 503.149977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:15.938586  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 17:46:16.008458  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 17:46:16.042488  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:16.042521  216535 retry.go:31] will retry after 646.854109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:16.135647  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:16.135679  216535 retry.go:31] will retry after 537.353244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:16.417984  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 17:46:16.503765  216535 node_ready.go:53] error getting node "old-k8s-version-145659": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-145659": dial tcp 192.168.76.2:8443: connect: connection refused
	W0120 17:46:16.561798  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:16.561878  216535 retry.go:31] will retry after 361.333645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:16.659121  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0120 17:46:16.673461  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 17:46:16.689816  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 17:46:16.872017  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:16.872104  216535 retry.go:31] will retry after 1.164701291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:16.913870  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:16.913947  216535 retry.go:31] will retry after 864.208742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:16.913974  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:16.913994  216535 retry.go:31] will retry after 757.965934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:16.924138  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 17:46:17.026348  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:17.026377  216535 retry.go:31] will retry after 641.604695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:17.668523  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 17:46:17.672939  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 17:46:17.779299  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 17:46:17.840675  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:17.840708  216535 retry.go:31] will retry after 717.030986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:17.870257  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:17.870289  216535 retry.go:31] will retry after 1.587737674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:17.955559  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:17.955594  216535 retry.go:31] will retry after 1.255434296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:18.037457  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 17:46:18.157248  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:18.157278  216535 retry.go:31] will retry after 738.912551ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:18.558229  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 17:46:18.685255  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:18.685290  216535 retry.go:31] will retry after 1.664066253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:18.896525  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 17:46:18.999934  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:18.999963  216535 retry.go:31] will retry after 1.375916985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:19.003602  216535 node_ready.go:53] error getting node "old-k8s-version-145659": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-145659": dial tcp 192.168.76.2:8443: connect: connection refused
	I0120 17:46:19.212030  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 17:46:19.310268  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:19.310298  216535 retry.go:31] will retry after 2.229721873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:19.458563  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 17:46:19.578564  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:19.578595  216535 retry.go:31] will retry after 1.790748201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:20.350222  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 17:46:20.376505  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 17:46:20.570451  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:20.570482  216535 retry.go:31] will retry after 1.618998146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 17:46:20.586593  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:20.586642  216535 retry.go:31] will retry after 2.937473882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:21.003780  216535 node_ready.go:53] error getting node "old-k8s-version-145659": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-145659": dial tcp 192.168.76.2:8443: connect: connection refused
	I0120 17:46:21.370420  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 17:46:21.488732  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:21.488773  216535 retry.go:31] will retry after 1.507248326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:21.540577  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 17:46:21.674767  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:21.674798  216535 retry.go:31] will retry after 2.288869555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:22.189896  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 17:46:22.363816  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:22.363847  216535 retry.go:31] will retry after 5.437445769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:22.996838  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 17:46:23.209856  216535 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:23.209888  216535 retry.go:31] will retry after 6.351708828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 17:46:23.503471  216535 node_ready.go:53] error getting node "old-k8s-version-145659": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-145659": dial tcp 192.168.76.2:8443: connect: connection refused
	I0120 17:46:23.524698  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0120 17:46:23.964485  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 17:46:27.802974  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 17:46:29.561779  216535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 17:46:33.262437  216535 node_ready.go:49] node "old-k8s-version-145659" has status "Ready":"True"
	I0120 17:46:33.262473  216535 node_ready.go:38] duration metric: took 18.759947561s for node "old-k8s-version-145659" to be "Ready" ...
	I0120 17:46:33.262485  216535 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 17:46:33.561719  216535 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-gtjp2" in "kube-system" namespace to be "Ready" ...
	I0120 17:46:33.697509  216535 pod_ready.go:93] pod "coredns-74ff55c5b-gtjp2" in "kube-system" namespace has status "Ready":"True"
	I0120 17:46:33.697535  216535 pod_ready.go:82] duration metric: took 135.786323ms for pod "coredns-74ff55c5b-gtjp2" in "kube-system" namespace to be "Ready" ...
	I0120 17:46:33.697547  216535 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:46:34.769152  216535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.24441767s)
	I0120 17:46:34.844139  216535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.879608142s)
	I0120 17:46:34.844203  216535 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-145659"
	I0120 17:46:35.375442  216535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.572421593s)
	I0120 17:46:35.375684  216535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.8138478s)
	I0120 17:46:35.378616  216535 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-145659 addons enable metrics-server
	
	I0120 17:46:35.381648  216535 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0120 17:46:35.384693  216535 addons.go:514] duration metric: took 21.158081764s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0120 17:46:35.704216  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:37.704836  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:40.210059  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:42.704010  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:44.704359  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:47.204306  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:49.204610  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:51.205222  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:53.245457  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:55.704390  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:46:57.720388  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:00.233791  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:02.703201  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:04.706343  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:07.204424  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:09.204459  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:11.205114  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:13.207455  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:15.703999  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:18.206172  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:20.703848  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:22.705047  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:25.203868  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:27.204013  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:29.204819  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:31.205720  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:33.704553  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:36.204347  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:38.204461  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:40.703783  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:42.704192  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:44.704169  216535 pod_ready.go:93] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:44.704198  216535 pod_ready.go:82] duration metric: took 1m11.006642524s for pod "etcd-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:44.704215  216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:44.709524  216535 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:44.709549  216535 pod_ready.go:82] duration metric: took 5.326555ms for pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:44.709561  216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:46.720597  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:49.218373  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:51.220240  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:53.731463  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:56.219975  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:58.715969  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:00.716172  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:01.717465  216535 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
	I0120 17:48:01.717491  216535 pod_ready.go:82] duration metric: took 17.007921004s for pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.717503  216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxqgj" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.723321  216535 pod_ready.go:93] pod "kube-proxy-mxqgj" in "kube-system" namespace has status "Ready":"True"
	I0120 17:48:01.723396  216535 pod_ready.go:82] duration metric: took 5.87229ms for pod "kube-proxy-mxqgj" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.723409  216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.729329  216535 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
	I0120 17:48:01.729356  216535 pod_ready.go:82] duration metric: took 5.938522ms for pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.729367  216535 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:03.811502  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:06.253893  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:08.739025  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:11.239058  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:13.736337  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:15.736465  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:18.247835  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:20.747201  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:23.242774  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:25.735545  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:27.736290  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:30.243746  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:32.737127  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:34.737472  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:37.243570  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:39.245847  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:41.736938  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:44.242652  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:46.736378  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:49.243543  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:51.243642  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:53.244529  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:55.245129  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:57.736190  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:00.244816  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:02.245766  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:04.295578  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:06.736622  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:08.737036  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:10.737207  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:13.242704  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:15.735684  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:17.737791  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:20.244523  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:22.244600  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:24.735659  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:26.736790  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:29.250223  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:31.753850  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:34.236244  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:36.243241  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:38.736581  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:40.736828  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:43.238843  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:45.736169  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:47.736599  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:50.244905  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:52.737487  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:54.754561  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:57.236641  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:59.238929  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:01.241741  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:03.242427  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:05.736193  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:07.736416  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:10.240839  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:12.244010  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:14.246547  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:16.737206  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:19.244440  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:21.244729  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:23.736600  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:26.244612  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:28.250474  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:30.739819  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:33.245363  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:35.737773  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:37.742221  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:40.237488  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:42.738257  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:45.239382  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:47.736272  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:50.236202  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:52.239206  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:54.244758  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:56.736346  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:59.237672  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:01.244367  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:03.736783  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:05.737354  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:08.235650  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:10.237001  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:12.237848  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:14.240863  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:16.243349  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:18.737611  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:21.244639  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:23.735945  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:26.242287  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:28.735482  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:30.736321  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:32.736991  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:35.236754  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:37.244823  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:39.735311  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:41.735810  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:43.736169  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:45.742400  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:48.243218  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:50.244231  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:52.244707  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:54.248009  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:56.737674  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:59.241838  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:52:01.244283  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:52:01.736777  216535 pod_ready.go:82] duration metric: took 4m0.007395127s for pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace to be "Ready" ...
	E0120 17:52:01.736846  216535 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 17:52:01.736870  216535 pod_ready.go:39] duration metric: took 5m28.474374205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 17:52:01.736899  216535 api_server.go:52] waiting for apiserver process to appear ...
	I0120 17:52:01.736964  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 17:52:01.737053  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 17:52:01.781253  216535 cri.go:89] found id: "f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
	I0120 17:52:01.781321  216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
	I0120 17:52:01.781341  216535 cri.go:89] found id: ""
	I0120 17:52:01.781356  216535 logs.go:282] 2 containers: [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e]
	I0120 17:52:01.781432  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.785393  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.788792  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 17:52:01.788862  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 17:52:01.833834  216535 cri.go:89] found id: "17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
	I0120 17:52:01.833869  216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
	I0120 17:52:01.833902  216535 cri.go:89] found id: ""
	I0120 17:52:01.833910  216535 logs.go:282] 2 containers: [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec]
	I0120 17:52:01.833990  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.838990  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.843467  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 17:52:01.843556  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 17:52:01.886764  216535 cri.go:89] found id: "583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
	I0120 17:52:01.886856  216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
	I0120 17:52:01.886877  216535 cri.go:89] found id: ""
	I0120 17:52:01.886908  216535 logs.go:282] 2 containers: [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc]
	I0120 17:52:01.886983  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.891011  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.894775  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 17:52:01.894856  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 17:52:01.949896  216535 cri.go:89] found id: "2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
	I0120 17:52:01.949920  216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
	I0120 17:52:01.949925  216535 cri.go:89] found id: ""
	I0120 17:52:01.949933  216535 logs.go:282] 2 containers: [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90]
	I0120 17:52:01.949992  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.954296  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.958371  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 17:52:01.958506  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 17:52:02.018621  216535 cri.go:89] found id: "dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
	I0120 17:52:02.018645  216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
	I0120 17:52:02.018650  216535 cri.go:89] found id: ""
	I0120 17:52:02.018657  216535 logs.go:282] 2 containers: [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42]
	I0120 17:52:02.018714  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.023690  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.028696  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 17:52:02.028860  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 17:52:02.096051  216535 cri.go:89] found id: "c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
	I0120 17:52:02.096073  216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
	I0120 17:52:02.096078  216535 cri.go:89] found id: ""
	I0120 17:52:02.096085  216535 logs.go:282] 2 containers: [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f]
	I0120 17:52:02.096149  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.100993  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.106917  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 17:52:02.106990  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 17:52:02.174049  216535 cri.go:89] found id: "6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
	I0120 17:52:02.174080  216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
	I0120 17:52:02.174086  216535 cri.go:89] found id: ""
	I0120 17:52:02.174093  216535 logs.go:282] 2 containers: [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f]
	I0120 17:52:02.174145  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.179127  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.184826  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 17:52:02.184901  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 17:52:02.254018  216535 cri.go:89] found id: "9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
	I0120 17:52:02.254041  216535 cri.go:89] found id: ""
	I0120 17:52:02.254049  216535 logs.go:282] 1 containers: [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8]
	I0120 17:52:02.254122  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.260217  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 17:52:02.260276  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 17:52:02.316256  216535 cri.go:89] found id: "027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
	I0120 17:52:02.316280  216535 cri.go:89] found id: "91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
	I0120 17:52:02.316286  216535 cri.go:89] found id: ""
	I0120 17:52:02.316293  216535 logs.go:282] 2 containers: [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd]
	I0120 17:52:02.316352  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.321766  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.327502  216535 logs.go:123] Gathering logs for dmesg ...
	I0120 17:52:02.327525  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 17:52:02.343747  216535 logs.go:123] Gathering logs for describe nodes ...
	I0120 17:52:02.343778  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 17:52:02.674989  216535 logs.go:123] Gathering logs for kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] ...
	I0120 17:52:02.675019  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
	I0120 17:52:02.739409  216535 logs.go:123] Gathering logs for kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] ...
	I0120 17:52:02.739429  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
	I0120 17:52:02.805987  216535 logs.go:123] Gathering logs for kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] ...
	I0120 17:52:02.806072  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
	I0120 17:52:02.862091  216535 logs.go:123] Gathering logs for kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] ...
	I0120 17:52:02.862117  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
	I0120 17:52:02.952148  216535 logs.go:123] Gathering logs for storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] ...
	I0120 17:52:02.952223  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
	I0120 17:52:03.020765  216535 logs.go:123] Gathering logs for container status ...
	I0120 17:52:03.020815  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 17:52:03.090382  216535 logs.go:123] Gathering logs for kubelet ...
	I0120 17:52:03.090580  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 17:52:03.161589  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:34 old-k8s-version-145659 kubelet[662]: E0120 17:46:34.880251     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.161853  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:35 old-k8s-version-145659 kubelet[662]: E0120 17:46:35.605048     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.165125  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:50 old-k8s-version-145659 kubelet[662]: E0120 17:46:50.413085     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.167727  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:03 old-k8s-version-145659 kubelet[662]: E0120 17:47:03.698813     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.167958  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.404037     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.168311  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.706245     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.168784  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.711644     662 pod_workers.go:191] Error syncing pod ceb78d8f-604f-44e7-a643-6a7788c747ae ("storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"
	W0120 17:52:03.169139  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.712757     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.170224  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:18 old-k8s-version-145659 kubelet[662]: E0120 17:47:18.760650     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.172926  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:19 old-k8s-version-145659 kubelet[662]: E0120 17:47:19.413053     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.173303  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:23 old-k8s-version-145659 kubelet[662]: E0120 17:47:23.877153     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.173514  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:31 old-k8s-version-145659 kubelet[662]: E0120 17:47:31.403908     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.173865  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:36 old-k8s-version-145659 kubelet[662]: E0120 17:47:36.403402     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.174073  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:46 old-k8s-version-145659 kubelet[662]: E0120 17:47:46.412253     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.174688  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:48 old-k8s-version-145659 kubelet[662]: E0120 17:47:48.845203     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.175052  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:53 old-k8s-version-145659 kubelet[662]: E0120 17:47:53.876712     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.175261  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:58 old-k8s-version-145659 kubelet[662]: E0120 17:47:58.411076     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.175632  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:06 old-k8s-version-145659 kubelet[662]: E0120 17:48:06.403375     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.178118  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:12 old-k8s-version-145659 kubelet[662]: E0120 17:48:12.422259     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.178583  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:21 old-k8s-version-145659 kubelet[662]: E0120 17:48:21.403254     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.178770  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:25 old-k8s-version-145659 kubelet[662]: E0120 17:48:25.404070     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.179381  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:34 old-k8s-version-145659 kubelet[662]: E0120 17:48:34.988709     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.179564  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:39 old-k8s-version-145659 kubelet[662]: E0120 17:48:39.403769     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.179889  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:43 old-k8s-version-145659 kubelet[662]: E0120 17:48:43.877519     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.180070  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:53 old-k8s-version-145659 kubelet[662]: E0120 17:48:53.403792     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.180396  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:58 old-k8s-version-145659 kubelet[662]: E0120 17:48:58.408685     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.180579  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:06 old-k8s-version-145659 kubelet[662]: E0120 17:49:06.403734     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.180905  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:11 old-k8s-version-145659 kubelet[662]: E0120 17:49:11.403959     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.181086  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:18 old-k8s-version-145659 kubelet[662]: E0120 17:49:18.408125     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.181407  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:24 old-k8s-version-145659 kubelet[662]: E0120 17:49:24.407972     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.181587  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:30 old-k8s-version-145659 kubelet[662]: E0120 17:49:30.404331     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.181909  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:37 old-k8s-version-145659 kubelet[662]: E0120 17:49:37.403265     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.184453  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:45 old-k8s-version-145659 kubelet[662]: E0120 17:49:45.414508     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.184816  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:48 old-k8s-version-145659 kubelet[662]: E0120 17:49:48.403936     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.185031  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:00 old-k8s-version-145659 kubelet[662]: E0120 17:50:00.404116     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.185681  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:04 old-k8s-version-145659 kubelet[662]: E0120 17:50:04.268511     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.185896  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:12 old-k8s-version-145659 kubelet[662]: E0120 17:50:12.407685     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.186251  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:13 old-k8s-version-145659 kubelet[662]: E0120 17:50:13.876917     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.186463  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:25 old-k8s-version-145659 kubelet[662]: E0120 17:50:25.403750     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.186830  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:28 old-k8s-version-145659 kubelet[662]: E0120 17:50:28.405640     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.187051  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:40 old-k8s-version-145659 kubelet[662]: E0120 17:50:40.403822     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.187407  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.187689  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.188047  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.188255  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.188613  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.188828  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.189195  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.189403  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.189758  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.189969  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.190324  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.190536  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.190894  216535 logs.go:138] Found kubelet problem: Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	I0120 17:52:03.190919  216535 logs.go:123] Gathering logs for etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] ...
	I0120 17:52:03.190947  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
	I0120 17:52:03.259910  216535 logs.go:123] Gathering logs for kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] ...
	I0120 17:52:03.259991  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
	I0120 17:52:03.317942  216535 logs.go:123] Gathering logs for kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] ...
	I0120 17:52:03.318013  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
	I0120 17:52:03.380525  216535 logs.go:123] Gathering logs for kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] ...
	I0120 17:52:03.380608  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
	I0120 17:52:03.453396  216535 logs.go:123] Gathering logs for coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] ...
	I0120 17:52:03.453442  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
	I0120 17:52:03.506945  216535 logs.go:123] Gathering logs for coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] ...
	I0120 17:52:03.506974  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
	I0120 17:52:03.555548  216535 logs.go:123] Gathering logs for kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] ...
	I0120 17:52:03.555628  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
	I0120 17:52:03.674894  216535 logs.go:123] Gathering logs for storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] ...
	I0120 17:52:03.674971  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
	I0120 17:52:03.746584  216535 logs.go:123] Gathering logs for containerd ...
	I0120 17:52:03.746608  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 17:52:03.830076  216535 logs.go:123] Gathering logs for kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] ...
	I0120 17:52:03.830148  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
	I0120 17:52:03.938308  216535 logs.go:123] Gathering logs for etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] ...
	I0120 17:52:03.938397  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
	I0120 17:52:04.023242  216535 logs.go:123] Gathering logs for kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] ...
	I0120 17:52:04.023376  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
	I0120 17:52:04.093186  216535 logs.go:123] Gathering logs for kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] ...
	I0120 17:52:04.093218  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
	I0120 17:52:04.203549  216535 out.go:358] Setting ErrFile to fd 2...
	I0120 17:52:04.203705  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 17:52:04.203798  216535 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0120 17:52:04.203843  216535 out.go:270]   Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	  Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:04.203889  216535 out.go:270]   Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:04.203925  216535 out.go:270]   Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	  Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:04.203955  216535 out.go:270]   Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:04.203988  216535 out.go:270]   Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	  Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	I0120 17:52:04.204019  216535 out.go:358] Setting ErrFile to fd 2...
	I0120 17:52:04.204048  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:52:14.204540  216535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 17:52:14.216885  216535 api_server.go:72] duration metric: took 5m59.990640844s to wait for apiserver process to appear ...
	I0120 17:52:14.216913  216535 api_server.go:88] waiting for apiserver healthz status ...
	I0120 17:52:14.216952  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 17:52:14.217012  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 17:52:14.275816  216535 cri.go:89] found id: "f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
	I0120 17:52:14.275838  216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
	I0120 17:52:14.275843  216535 cri.go:89] found id: ""
	I0120 17:52:14.275850  216535 logs.go:282] 2 containers: [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e]
	I0120 17:52:14.275981  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.280911  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.284620  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 17:52:14.284694  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 17:52:14.324506  216535 cri.go:89] found id: "17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
	I0120 17:52:14.324530  216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
	I0120 17:52:14.324536  216535 cri.go:89] found id: ""
	I0120 17:52:14.324544  216535 logs.go:282] 2 containers: [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec]
	I0120 17:52:14.324602  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.328307  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.331742  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 17:52:14.331812  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 17:52:14.375892  216535 cri.go:89] found id: "583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
	I0120 17:52:14.375913  216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
	I0120 17:52:14.375919  216535 cri.go:89] found id: ""
	I0120 17:52:14.375926  216535 logs.go:282] 2 containers: [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc]
	I0120 17:52:14.376011  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.379798  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.383248  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 17:52:14.383317  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 17:52:14.431319  216535 cri.go:89] found id: "2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
	I0120 17:52:14.431376  216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
	I0120 17:52:14.431382  216535 cri.go:89] found id: ""
	I0120 17:52:14.431388  216535 logs.go:282] 2 containers: [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90]
	I0120 17:52:14.431444  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.435015  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.438536  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 17:52:14.438604  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 17:52:14.483659  216535 cri.go:89] found id: "dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
	I0120 17:52:14.483691  216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
	I0120 17:52:14.483697  216535 cri.go:89] found id: ""
	I0120 17:52:14.483703  216535 logs.go:282] 2 containers: [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42]
	I0120 17:52:14.483778  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.487550  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.491261  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 17:52:14.491399  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 17:52:14.537554  216535 cri.go:89] found id: "c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
	I0120 17:52:14.537574  216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
	I0120 17:52:14.537580  216535 cri.go:89] found id: ""
	I0120 17:52:14.537587  216535 logs.go:282] 2 containers: [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f]
	I0120 17:52:14.537645  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.541369  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.544958  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 17:52:14.545047  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 17:52:14.582569  216535 cri.go:89] found id: "6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
	I0120 17:52:14.582592  216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
	I0120 17:52:14.582598  216535 cri.go:89] found id: ""
	I0120 17:52:14.582605  216535 logs.go:282] 2 containers: [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f]
	I0120 17:52:14.582683  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.586500  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.590053  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 17:52:14.590126  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 17:52:14.663263  216535 cri.go:89] found id: "027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
	I0120 17:52:14.663283  216535 cri.go:89] found id: "91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
	I0120 17:52:14.663289  216535 cri.go:89] found id: ""
	I0120 17:52:14.663296  216535 logs.go:282] 2 containers: [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd]
	I0120 17:52:14.663372  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.666867  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.672075  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 17:52:14.672174  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 17:52:14.720019  216535 cri.go:89] found id: "9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
	I0120 17:52:14.720042  216535 cri.go:89] found id: ""
	I0120 17:52:14.720054  216535 logs.go:282] 1 containers: [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8]
	I0120 17:52:14.720116  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.723774  216535 logs.go:123] Gathering logs for kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] ...
	I0120 17:52:14.723800  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
	I0120 17:52:14.773380  216535 logs.go:123] Gathering logs for storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] ...
	I0120 17:52:14.773417  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
	I0120 17:52:14.816814  216535 logs.go:123] Gathering logs for kubelet ...
	I0120 17:52:14.816842  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 17:52:14.876608  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:34 old-k8s-version-145659 kubelet[662]: E0120 17:46:34.880251     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.876839  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:35 old-k8s-version-145659 kubelet[662]: E0120 17:46:35.605048     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.879700  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:50 old-k8s-version-145659 kubelet[662]: E0120 17:46:50.413085     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.883739  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:03 old-k8s-version-145659 kubelet[662]: E0120 17:47:03.698813     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.883950  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.404037     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.884282  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.706245     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.884720  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.711644     662 pod_workers.go:191] Error syncing pod ceb78d8f-604f-44e7-a643-6a7788c747ae ("storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"
	W0120 17:52:14.885047  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.712757     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.886100  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:18 old-k8s-version-145659 kubelet[662]: E0120 17:47:18.760650     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.888645  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:19 old-k8s-version-145659 kubelet[662]: E0120 17:47:19.413053     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.889002  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:23 old-k8s-version-145659 kubelet[662]: E0120 17:47:23.877153     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.889194  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:31 old-k8s-version-145659 kubelet[662]: E0120 17:47:31.403908     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.889559  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:36 old-k8s-version-145659 kubelet[662]: E0120 17:47:36.403402     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.889746  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:46 old-k8s-version-145659 kubelet[662]: E0120 17:47:46.412253     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.890333  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:48 old-k8s-version-145659 kubelet[662]: E0120 17:47:48.845203     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.890660  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:53 old-k8s-version-145659 kubelet[662]: E0120 17:47:53.876712     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.890848  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:58 old-k8s-version-145659 kubelet[662]: E0120 17:47:58.411076     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.891179  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:06 old-k8s-version-145659 kubelet[662]: E0120 17:48:06.403375     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.893674  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:12 old-k8s-version-145659 kubelet[662]: E0120 17:48:12.422259     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.894035  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:21 old-k8s-version-145659 kubelet[662]: E0120 17:48:21.403254     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.894400  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:25 old-k8s-version-145659 kubelet[662]: E0120 17:48:25.404070     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.895006  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:34 old-k8s-version-145659 kubelet[662]: E0120 17:48:34.988709     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.895192  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:39 old-k8s-version-145659 kubelet[662]: E0120 17:48:39.403769     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.895564  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:43 old-k8s-version-145659 kubelet[662]: E0120 17:48:43.877519     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.895751  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:53 old-k8s-version-145659 kubelet[662]: E0120 17:48:53.403792     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.896077  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:58 old-k8s-version-145659 kubelet[662]: E0120 17:48:58.408685     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.896260  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:06 old-k8s-version-145659 kubelet[662]: E0120 17:49:06.403734     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.896584  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:11 old-k8s-version-145659 kubelet[662]: E0120 17:49:11.403959     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.896768  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:18 old-k8s-version-145659 kubelet[662]: E0120 17:49:18.408125     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.897094  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:24 old-k8s-version-145659 kubelet[662]: E0120 17:49:24.407972     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.897306  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:30 old-k8s-version-145659 kubelet[662]: E0120 17:49:30.404331     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.897633  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:37 old-k8s-version-145659 kubelet[662]: E0120 17:49:37.403265     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.900069  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:45 old-k8s-version-145659 kubelet[662]: E0120 17:49:45.414508     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.900399  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:48 old-k8s-version-145659 kubelet[662]: E0120 17:49:48.403936     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.900588  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:00 old-k8s-version-145659 kubelet[662]: E0120 17:50:00.404116     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.901175  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:04 old-k8s-version-145659 kubelet[662]: E0120 17:50:04.268511     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.901358  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:12 old-k8s-version-145659 kubelet[662]: E0120 17:50:12.407685     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.901683  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:13 old-k8s-version-145659 kubelet[662]: E0120 17:50:13.876917     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.901866  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:25 old-k8s-version-145659 kubelet[662]: E0120 17:50:25.403750     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.902191  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:28 old-k8s-version-145659 kubelet[662]: E0120 17:50:28.405640     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.902379  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:40 old-k8s-version-145659 kubelet[662]: E0120 17:50:40.403822     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.902706  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.902892  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.903219  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.903413  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.903739  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.903923  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.904249  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.904433  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.904758  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.904944  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.905272  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.905457  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.905785  216535 logs.go:138] Found kubelet problem: Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.905970  216535 logs.go:138] Found kubelet problem: Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.906299  216535 logs.go:138] Found kubelet problem: Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	I0120 17:52:14.906310  216535 logs.go:123] Gathering logs for kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] ...
	I0120 17:52:14.906325  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
	I0120 17:52:14.972580  216535 logs.go:123] Gathering logs for coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] ...
	I0120 17:52:14.972618  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
	I0120 17:52:15.024121  216535 logs.go:123] Gathering logs for containerd ...
	I0120 17:52:15.024165  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 17:52:15.100734  216535 logs.go:123] Gathering logs for describe nodes ...
	I0120 17:52:15.100774  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 17:52:15.284993  216535 logs.go:123] Gathering logs for coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] ...
	I0120 17:52:15.285026  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
	I0120 17:52:15.335235  216535 logs.go:123] Gathering logs for kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] ...
	I0120 17:52:15.335264  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
	I0120 17:52:15.374772  216535 logs.go:123] Gathering logs for storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] ...
	I0120 17:52:15.374806  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
	I0120 17:52:15.433634  216535 logs.go:123] Gathering logs for container status ...
	I0120 17:52:15.433663  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 17:52:15.488059  216535 logs.go:123] Gathering logs for etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] ...
	I0120 17:52:15.488091  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
	I0120 17:52:15.542254  216535 logs.go:123] Gathering logs for kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] ...
	I0120 17:52:15.542284  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
	I0120 17:52:15.582486  216535 logs.go:123] Gathering logs for kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] ...
	I0120 17:52:15.582513  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
	I0120 17:52:15.660944  216535 logs.go:123] Gathering logs for kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] ...
	I0120 17:52:15.661023  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
	I0120 17:52:15.709672  216535 logs.go:123] Gathering logs for kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] ...
	I0120 17:52:15.709763  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
	I0120 17:52:15.755613  216535 logs.go:123] Gathering logs for kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] ...
	I0120 17:52:15.755647  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
	I0120 17:52:15.794100  216535 logs.go:123] Gathering logs for kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] ...
	I0120 17:52:15.794126  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
	I0120 17:52:15.876898  216535 logs.go:123] Gathering logs for kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] ...
	I0120 17:52:15.876935  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
	I0120 17:52:15.937814  216535 logs.go:123] Gathering logs for dmesg ...
	I0120 17:52:15.937842  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 17:52:15.955450  216535 logs.go:123] Gathering logs for kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] ...
	I0120 17:52:15.955481  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
	I0120 17:52:16.047655  216535 logs.go:123] Gathering logs for etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] ...
	I0120 17:52:16.047691  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
	I0120 17:52:16.094113  216535 out.go:358] Setting ErrFile to fd 2...
	I0120 17:52:16.094145  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 17:52:16.094250  216535 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0120 17:52:16.094269  216535 out.go:270]   Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	  Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:16.094283  216535 out.go:270]   Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:16.094294  216535 out.go:270]   Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	  Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:16.094301  216535 out.go:270]   Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:16.094307  216535 out.go:270]   Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	  Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	I0120 17:52:16.094313  216535 out.go:358] Setting ErrFile to fd 2...
	I0120 17:52:16.094320  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:52:26.095908  216535 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0120 17:52:26.165226  216535 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0120 17:52:26.168436  216535 out.go:201] 
	W0120 17:52:26.171235  216535 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0120 17:52:26.171279  216535 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0120 17:52:26.171300  216535 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0120 17:52:26.171306  216535 out.go:270] * 
	* 
	W0120 17:52:26.172503  216535 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 17:52:26.175703  216535 out.go:201] 
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-145659 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-145659
helpers_test.go:235: (dbg) docker inspect old-k8s-version-145659:
-- stdout --
	[
	    {
	        "Id": "68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2",
	        "Created": "2025-01-20T17:43:12.54738171Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216800,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-20T17:46:06.431464742Z",
	            "FinishedAt": "2025-01-20T17:46:05.303575018Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2/hosts",
	        "LogPath": "/var/lib/docker/containers/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2/68f5886dcfe3292bc3afa4a7871d063af108b7b91be03cbbad2d302680b280b2-json.log",
	        "Name": "/old-k8s-version-145659",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-145659:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-145659",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dbca99fa5f002a2c275f8a6dcc3ecf11fb45907a2a29cad3b384ad85f73eae59-init/diff:/var/lib/docker/overlay2/9b176083dace6a900153a2b6e94fac06a5680ba9c3cc84680719d1cb51350052/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dbca99fa5f002a2c275f8a6dcc3ecf11fb45907a2a29cad3b384ad85f73eae59/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dbca99fa5f002a2c275f8a6dcc3ecf11fb45907a2a29cad3b384ad85f73eae59/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dbca99fa5f002a2c275f8a6dcc3ecf11fb45907a2a29cad3b384ad85f73eae59/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-145659",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-145659/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-145659",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-145659",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-145659",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9624f41dcb0188ab87b3e9fd8b2712388e69a5f22bcce69e5a74569b21564d0",
	            "SandboxKey": "/var/run/docker/netns/a9624f41dcb0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-145659": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "86eb91404aba21fb833837cdd78311917e6f4c87eaa2c3ae30f9551926747b07",
	                    "EndpointID": "616b0be2d30f8285abb9cde0029dfeba76553216f02ad6ad743bcf537f05547a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-145659",
	                        "68f5886dcfe3"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145659 -n old-k8s-version-145659
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-145659 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-145659 logs -n 25: (3.015713004s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-flag-832515                              | force-systemd-flag-832515 | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-832515                           | force-systemd-flag-832515 | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
	| start   | -p cert-expiration-156373                              | cert-expiration-156373    | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-062715                               | force-systemd-env-062715  | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-062715                            | force-systemd-env-062715  | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:42 UTC |
	| start   | -p cert-options-779915                                 | cert-options-779915       | jenkins | v1.35.0 | 20 Jan 25 17:42 UTC | 20 Jan 25 17:43 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-779915 ssh                                | cert-options-779915       | jenkins | v1.35.0 | 20 Jan 25 17:43 UTC | 20 Jan 25 17:43 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-779915 -- sudo                         | cert-options-779915       | jenkins | v1.35.0 | 20 Jan 25 17:43 UTC | 20 Jan 25 17:43 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-779915                                 | cert-options-779915       | jenkins | v1.35.0 | 20 Jan 25 17:43 UTC | 20 Jan 25 17:43 UTC |
	| start   | -p old-k8s-version-145659                              | old-k8s-version-145659    | jenkins | v1.35.0 | 20 Jan 25 17:43 UTC | 20 Jan 25 17:45 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-156373                              | cert-expiration-156373    | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:45 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-145659        | old-k8s-version-145659    | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-145659                              | old-k8s-version-145659    | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:46 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-156373                              | cert-expiration-156373    | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:45 UTC |
	| start   | -p embed-certs-698725                                  | embed-certs-698725        | jenkins | v1.35.0 | 20 Jan 25 17:45 UTC | 20 Jan 25 17:47 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-145659             | old-k8s-version-145659    | jenkins | v1.35.0 | 20 Jan 25 17:46 UTC | 20 Jan 25 17:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-145659                              | old-k8s-version-145659    | jenkins | v1.35.0 | 20 Jan 25 17:46 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-698725            | embed-certs-698725        | jenkins | v1.35.0 | 20 Jan 25 17:47 UTC | 20 Jan 25 17:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-698725                                  | embed-certs-698725        | jenkins | v1.35.0 | 20 Jan 25 17:47 UTC | 20 Jan 25 17:47 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-698725                 | embed-certs-698725        | jenkins | v1.35.0 | 20 Jan 25 17:47 UTC | 20 Jan 25 17:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-698725                                  | embed-certs-698725        | jenkins | v1.35.0 | 20 Jan 25 17:47 UTC | 20 Jan 25 17:52 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                           |         |         |                     |                     |
	| image   | embed-certs-698725 image list                          | embed-certs-698725        | jenkins | v1.35.0 | 20 Jan 25 17:52 UTC | 20 Jan 25 17:52 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p embed-certs-698725                                  | embed-certs-698725        | jenkins | v1.35.0 | 20 Jan 25 17:52 UTC | 20 Jan 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| unpause | -p embed-certs-698725                                  | embed-certs-698725        | jenkins | v1.35.0 | 20 Jan 25 17:52 UTC | 20 Jan 25 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| delete  | -p embed-certs-698725                                  | embed-certs-698725        | jenkins | v1.35.0 | 20 Jan 25 17:52 UTC |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 17:47:43
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 17:47:43.016426  222240 out.go:345] Setting OutFile to fd 1 ...
	I0120 17:47:43.016699  222240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:47:43.016873  222240 out.go:358] Setting ErrFile to fd 2...
	I0120 17:47:43.016886  222240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:47:43.017221  222240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 17:47:43.017747  222240 out.go:352] Setting JSON to false
	I0120 17:47:43.019057  222240 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5407,"bootTime":1737389856,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 17:47:43.019215  222240 start.go:139] virtualization:  
	I0120 17:47:43.022633  222240 out.go:177] * [embed-certs-698725] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 17:47:43.026419  222240 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 17:47:43.026536  222240 notify.go:220] Checking for updates...
	I0120 17:47:43.032725  222240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 17:47:43.035753  222240 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 17:47:43.038581  222240 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	I0120 17:47:43.041469  222240 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 17:47:43.044258  222240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 17:47:43.047901  222240 config.go:182] Loaded profile config "embed-certs-698725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:47:43.048455  222240 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 17:47:43.070041  222240 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 17:47:43.070174  222240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 17:47:43.137590  222240 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 17:47:43.128169818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 17:47:43.137700  222240 docker.go:318] overlay module found
	I0120 17:47:43.140983  222240 out.go:177] * Using the docker driver based on existing profile
	I0120 17:47:43.143779  222240 start.go:297] selected driver: docker
	I0120 17:47:43.143802  222240 start.go:901] validating driver "docker" against &{Name:embed-certs-698725 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 17:47:43.143924  222240 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 17:47:43.144640  222240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 17:47:43.200514  222240 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 17:47:43.189158118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 17:47:43.202585  222240 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 17:47:43.202619  222240 cni.go:84] Creating CNI manager for ""
	I0120 17:47:43.202681  222240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 17:47:43.202718  222240 start.go:340] cluster config:
	{Name:embed-certs-698725 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 17:47:43.206163  222240 out.go:177] * Starting "embed-certs-698725" primary control-plane node in "embed-certs-698725" cluster
	I0120 17:47:43.209083  222240 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 17:47:43.212024  222240 out.go:177] * Pulling base image v0.0.46 ...
	I0120 17:47:43.214886  222240 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 17:47:43.214948  222240 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
	I0120 17:47:43.214960  222240 cache.go:56] Caching tarball of preloaded images
	I0120 17:47:43.214989  222240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 17:47:43.215097  222240 preload.go:172] Found /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0120 17:47:43.215109  222240 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
	I0120 17:47:43.215239  222240 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/config.json ...
	I0120 17:47:43.246059  222240 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0120 17:47:43.246079  222240 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0120 17:47:43.246103  222240 cache.go:227] Successfully downloaded all kic artifacts
	I0120 17:47:43.246136  222240 start.go:360] acquireMachinesLock for embed-certs-698725: {Name:mkb9032627882ab94f8c709279fd09e6fbf6e44e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 17:47:43.246201  222240 start.go:364] duration metric: took 47.696µs to acquireMachinesLock for "embed-certs-698725"
	I0120 17:47:43.246222  222240 start.go:96] Skipping create...Using existing machine configuration
	I0120 17:47:43.246227  222240 fix.go:54] fixHost starting: 
	I0120 17:47:43.246483  222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
	I0120 17:47:43.264178  222240 fix.go:112] recreateIfNeeded on embed-certs-698725: state=Stopped err=<nil>
	W0120 17:47:43.264213  222240 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 17:47:43.267550  222240 out.go:177] * Restarting existing docker container for "embed-certs-698725" ...
	I0120 17:47:42.704192  216535 pod_ready.go:103] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:44.704169  216535 pod_ready.go:93] pod "etcd-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:44.704198  216535 pod_ready.go:82] duration metric: took 1m11.006642524s for pod "etcd-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:44.704215  216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:44.709524  216535 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:44.709549  216535 pod_ready.go:82] duration metric: took 5.326555ms for pod "kube-apiserver-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:44.709561  216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:43.270488  222240 cli_runner.go:164] Run: docker start embed-certs-698725
	I0120 17:47:43.601651  222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
	I0120 17:47:43.626806  222240 kic.go:430] container "embed-certs-698725" state is running.
	I0120 17:47:43.627311  222240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-698725
	I0120 17:47:43.653895  222240 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/config.json ...
	I0120 17:47:43.654118  222240 machine.go:93] provisionDockerMachine start ...
	I0120 17:47:43.654179  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:43.680957  222240 main.go:141] libmachine: Using SSH client type: native
	I0120 17:47:43.681264  222240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0120 17:47:43.681274  222240 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 17:47:43.682064  222240 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0120 17:47:46.806927  222240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-698725
	
	I0120 17:47:46.806958  222240 ubuntu.go:169] provisioning hostname "embed-certs-698725"
	I0120 17:47:46.807021  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:46.824726  222240 main.go:141] libmachine: Using SSH client type: native
	I0120 17:47:46.825025  222240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0120 17:47:46.825046  222240 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-698725 && echo "embed-certs-698725" | sudo tee /etc/hostname
	I0120 17:47:46.968695  222240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-698725
	
	I0120 17:47:46.968778  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:46.987494  222240 main.go:141] libmachine: Using SSH client type: native
	I0120 17:47:46.987771  222240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0120 17:47:46.987794  222240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-698725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-698725/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-698725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 17:47:47.111545  222240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 17:47:47.111570  222240 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2518/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2518/.minikube}
	I0120 17:47:47.111606  222240 ubuntu.go:177] setting up certificates
	I0120 17:47:47.111616  222240 provision.go:84] configureAuth start
	I0120 17:47:47.111686  222240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-698725
	I0120 17:47:47.128644  222240 provision.go:143] copyHostCerts
	I0120 17:47:47.128713  222240 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem, removing ...
	I0120 17:47:47.128726  222240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem
	I0120 17:47:47.128800  222240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/ca.pem (1082 bytes)
	I0120 17:47:47.128895  222240 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem, removing ...
	I0120 17:47:47.128905  222240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem
	I0120 17:47:47.128931  222240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/cert.pem (1123 bytes)
	I0120 17:47:47.128987  222240 exec_runner.go:144] found /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem, removing ...
	I0120 17:47:47.128996  222240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem
	I0120 17:47:47.129025  222240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2518/.minikube/key.pem (1679 bytes)
	I0120 17:47:47.129078  222240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem org=jenkins.embed-certs-698725 san=[127.0.0.1 192.168.85.2 embed-certs-698725 localhost minikube]
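
The server certificate generated above embeds the SANs listed in the log (127.0.0.1, 192.168.85.2, the hostname, localhost, minikube). A minimal sketch, using the path from the log line, for confirming those SANs by hand on the build host:

    # print the Subject Alternative Name block of the freshly generated server cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
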
	I0120 17:47:48.350585  222240 provision.go:177] copyRemoteCerts
	I0120 17:47:48.350662  222240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 17:47:48.350704  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:48.368996  222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
	I0120 17:47:48.461392  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 17:47:48.495521  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0120 17:47:48.534237  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 17:47:48.566492  222240 provision.go:87] duration metric: took 1.454844886s to configureAuth
	I0120 17:47:48.566568  222240 ubuntu.go:193] setting minikube options for container-runtime
	I0120 17:47:48.566803  222240 config.go:182] Loaded profile config "embed-certs-698725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:47:48.566833  222240 machine.go:96] duration metric: took 4.912698605s to provisionDockerMachine
	I0120 17:47:48.566854  222240 start.go:293] postStartSetup for "embed-certs-698725" (driver="docker")
	I0120 17:47:48.566876  222240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 17:47:48.566952  222240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 17:47:48.567010  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:48.585796  222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
	I0120 17:47:48.681128  222240 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 17:47:48.684731  222240 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0120 17:47:48.684812  222240 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0120 17:47:48.684830  222240 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0120 17:47:48.684838  222240 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0120 17:47:48.684849  222240 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2518/.minikube/addons for local assets ...
	I0120 17:47:48.684908  222240 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2518/.minikube/files for local assets ...
	I0120 17:47:48.684999  222240 filesync.go:149] local asset: /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem -> 78442.pem in /etc/ssl/certs
	I0120 17:47:48.685116  222240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 17:47:48.694367  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem --> /etc/ssl/certs/78442.pem (1708 bytes)
	I0120 17:47:48.724378  222240 start.go:296] duration metric: took 157.498225ms for postStartSetup
	I0120 17:47:48.724505  222240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 17:47:48.724598  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:48.741729  222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
	I0120 17:47:48.829412  222240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0120 17:47:48.834247  222240 fix.go:56] duration metric: took 5.588011932s for fixHost
	I0120 17:47:48.834285  222240 start.go:83] releasing machines lock for "embed-certs-698725", held for 5.588062148s
	I0120 17:47:48.834362  222240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-698725
	I0120 17:47:48.856790  222240 ssh_runner.go:195] Run: cat /version.json
	I0120 17:47:48.856846  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:48.856791  222240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 17:47:48.857002  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:48.902458  222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
	I0120 17:47:48.902714  222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
	I0120 17:47:49.139656  222240 ssh_runner.go:195] Run: systemctl --version
	I0120 17:47:49.144292  222240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0120 17:47:49.148783  222240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0120 17:47:49.167194  222240 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
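
The find/sed invocation above only ensures that the loopback CNI config carries an explicit "name": "loopback" entry and cniVersion 1.0.0. A quick way to eyeball the patched file on the node (path glob taken from the command above):

    # show the loopback CNI config after patching; expect "name": "loopback" and "cniVersion": "1.0.0"
    sudo cat /etc/cni/net.d/*loopback.conf*
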
	I0120 17:47:49.167276  222240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 17:47:49.176321  222240 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 17:47:49.176357  222240 start.go:495] detecting cgroup driver to use...
	I0120 17:47:49.176389  222240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0120 17:47:49.176441  222240 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 17:47:49.191275  222240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 17:47:49.203898  222240 docker.go:217] disabling cri-docker service (if available) ...
	I0120 17:47:49.203967  222240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 17:47:49.219102  222240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 17:47:49.232158  222240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 17:47:49.326287  222240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 17:47:49.414084  222240 docker.go:233] disabling docker service ...
	I0120 17:47:49.414159  222240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 17:47:49.428166  222240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 17:47:49.440037  222240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 17:47:49.544699  222240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 17:47:49.624991  222240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 17:47:49.636913  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 17:47:49.654784  222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0120 17:47:49.665476  222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 17:47:49.676944  222240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 17:47:49.677017  222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 17:47:49.688153  222240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 17:47:49.701919  222240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 17:47:49.719262  222240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 17:47:49.729492  222240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 17:47:49.739189  222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 17:47:49.750351  222240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0120 17:47:49.760763  222240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0120 17:47:49.772111  222240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 17:47:49.782071  222240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 17:47:49.790910  222240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 17:47:49.870602  222240 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 17:47:50.055722  222240 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 17:47:50.055853  222240 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 17:47:50.060344  222240 start.go:563] Will wait 60s for crictl version
	I0120 17:47:50.060459  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:47:50.064168  222240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 17:47:50.110865  222240 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0120 17:47:50.110968  222240 ssh_runner.go:195] Run: containerd --version
	I0120 17:47:50.136248  222240 ssh_runner.go:195] Run: containerd --version
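
Between the crictl.yaml write and the sed edits above, the runtime is left talking CRI over /run/containerd/containerd.sock with the pause:3.10 sandbox image and the cgroupfs driver (SystemdCgroup = false). A minimal sketch for reading those effective values back on the node, assuming the stock crictl and containerd binaries:

    # crictl resolves the endpoint from /etc/crictl.yaml and should report containerd 1.7.24
    sudo cat /etc/crictl.yaml
    sudo crictl version
    # the merged containerd configuration shows the values the sed edits pinned
    sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image'
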
	I0120 17:47:50.182377  222240 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.24 ...
	I0120 17:47:46.720597  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:49.218373  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:50.185254  222240 cli_runner.go:164] Run: docker network inspect embed-certs-698725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0120 17:47:50.205731  222240 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0120 17:47:50.209580  222240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
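
The bash one-liner above rewrites /etc/hosts idempotently: any stale host.minikube.internal entry is dropped and a single mapping to the network gateway (192.168.85.1) is appended. Verifying it inside the node is itself a one-liner:

    # expect exactly one line: 192.168.85.1	host.minikube.internal
    grep 'host.minikube.internal' /etc/hosts
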
	I0120 17:47:50.222845  222240 kubeadm.go:883] updating cluster {Name:embed-certs-698725 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 17:47:50.222959  222240 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 17:47:50.223017  222240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 17:47:50.268222  222240 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 17:47:50.268246  222240 containerd.go:534] Images already preloaded, skipping extraction
	I0120 17:47:50.268305  222240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 17:47:50.310524  222240 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 17:47:50.310547  222240 cache_images.go:84] Images are preloaded, skipping loading
	I0120 17:47:50.310556  222240 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.0 containerd true true} ...
	I0120 17:47:50.310697  222240 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-698725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 17:47:50.310768  222240 ssh_runner.go:195] Run: sudo crictl info
	I0120 17:47:50.352774  222240 cni.go:84] Creating CNI manager for ""
	I0120 17:47:50.352796  222240 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 17:47:50.352807  222240 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 17:47:50.352831  222240 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-698725 NodeName:embed-certs-698725 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 17:47:50.352947  222240 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-698725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 17:47:50.353017  222240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 17:47:50.362779  222240 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 17:47:50.362850  222240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 17:47:50.372095  222240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0120 17:47:50.389782  222240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 17:47:50.408019  222240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
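
The three scp-from-memory steps above drop the kubelet drop-in (10-kubeadm.conf), the kubelet unit file, and the rendered kubeadm config onto the node. systemd merges the drop-in into the unit, so the effective ExecStart can be read back with:

    # show the kubelet unit together with the 10-kubeadm.conf override written above
    sudo systemctl cat kubelet
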
	I0120 17:47:50.427095  222240 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0120 17:47:50.430454  222240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 17:47:50.441590  222240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 17:47:50.541022  222240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 17:47:50.563660  222240 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725 for IP: 192.168.85.2
	I0120 17:47:50.563717  222240 certs.go:194] generating shared ca certs ...
	I0120 17:47:50.563737  222240 certs.go:226] acquiring lock for ca certs: {Name:mk409d9cbe30328f0e66b0d712629bd4b02b995b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 17:47:50.564131  222240 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2518/.minikube/ca.key
	I0120 17:47:50.564239  222240 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.key
	I0120 17:47:50.564270  222240 certs.go:256] generating profile certs ...
	I0120 17:47:50.564516  222240 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/client.key
	I0120 17:47:50.564700  222240 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/apiserver.key.b47539b9
	I0120 17:47:50.564795  222240 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/proxy-client.key
	I0120 17:47:50.565120  222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844.pem (1338 bytes)
	W0120 17:47:50.565208  222240 certs.go:480] ignoring /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844_empty.pem, impossibly tiny 0 bytes
	I0120 17:47:50.565232  222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 17:47:50.565274  222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/ca.pem (1082 bytes)
	I0120 17:47:50.565392  222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/cert.pem (1123 bytes)
	I0120 17:47:50.565485  222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/certs/key.pem (1679 bytes)
	I0120 17:47:50.565823  222240 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem (1708 bytes)
	I0120 17:47:50.566966  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 17:47:50.595967  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 17:47:50.627058  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 17:47:50.655285  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 17:47:50.688377  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 17:47:50.733678  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 17:47:50.773018  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 17:47:50.802874  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/embed-certs-698725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 17:47:50.830674  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 17:47:50.860949  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/certs/7844.pem --> /usr/share/ca-certificates/7844.pem (1338 bytes)
	I0120 17:47:50.888360  222240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/ssl/certs/78442.pem --> /usr/share/ca-certificates/78442.pem (1708 bytes)
	I0120 17:47:50.917473  222240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 17:47:50.936554  222240 ssh_runner.go:195] Run: openssl version
	I0120 17:47:50.944138  222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 17:47:50.954739  222240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 17:47:50.958356  222240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0120 17:47:50.958456  222240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 17:47:50.965820  222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 17:47:50.975019  222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7844.pem && ln -fs /usr/share/ca-certificates/7844.pem /etc/ssl/certs/7844.pem"
	I0120 17:47:50.984902  222240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7844.pem
	I0120 17:47:50.988480  222240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 17:06 /usr/share/ca-certificates/7844.pem
	I0120 17:47:50.988549  222240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7844.pem
	I0120 17:47:50.995758  222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7844.pem /etc/ssl/certs/51391683.0"
	I0120 17:47:51.005635  222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/78442.pem && ln -fs /usr/share/ca-certificates/78442.pem /etc/ssl/certs/78442.pem"
	I0120 17:47:51.016663  222240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/78442.pem
	I0120 17:47:51.020795  222240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 17:06 /usr/share/ca-certificates/78442.pem
	I0120 17:47:51.020869  222240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/78442.pem
	I0120 17:47:51.028452  222240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/78442.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 17:47:51.038125  222240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 17:47:51.042090  222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 17:47:51.049550  222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 17:47:51.057341  222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 17:47:51.064856  222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 17:47:51.072276  222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 17:47:51.079627  222240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
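
Each check above uses openssl's -checkend flag, which exits 0 only if the certificate will still be valid the given number of seconds (here 86400, i.e. 24 hours) from now. The same test can be run by hand against any of the listed certs, for example:

    # exit status 0 means the cert is good for at least another 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'
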
	I0120 17:47:51.087042  222240 kubeadm.go:392] StartCluster: {Name:embed-certs-698725 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-698725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 17:47:51.087200  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 17:47:51.087292  222240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 17:47:51.127972  222240 cri.go:89] found id: "f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
	I0120 17:47:51.127998  222240 cri.go:89] found id: "a249b6a6bd06a920eea275ddf24e32bbdfb772be3581b64a0ec16ff624981de2"
	I0120 17:47:51.128004  222240 cri.go:89] found id: "03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
	I0120 17:47:51.128008  222240 cri.go:89] found id: "a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
	I0120 17:47:51.128011  222240 cri.go:89] found id: "c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
	I0120 17:47:51.128016  222240 cri.go:89] found id: "2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
	I0120 17:47:51.128019  222240 cri.go:89] found id: "b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
	I0120 17:47:51.128022  222240 cri.go:89] found id: "21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
	I0120 17:47:51.128026  222240 cri.go:89] found id: ""
	I0120 17:47:51.128079  222240 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0120 17:47:51.146118  222240 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T17:47:51Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0120 17:47:51.146232  222240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 17:47:51.156333  222240 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 17:47:51.156354  222240 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 17:47:51.156406  222240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 17:47:51.165984  222240 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 17:47:51.166618  222240 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-698725" does not appear in /home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 17:47:51.166901  222240 kubeconfig.go:62] /home/jenkins/minikube-integration/20109-2518/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-698725" cluster setting kubeconfig missing "embed-certs-698725" context setting]
	I0120 17:47:51.167474  222240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2518/kubeconfig: {Name:mk7eb37afa68734d2ba48fcac1147e4fe5c87411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 17:47:51.168853  222240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 17:47:51.179263  222240 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0120 17:47:51.179310  222240 kubeadm.go:597] duration metric: took 22.949431ms to restartPrimaryControlPlane
	I0120 17:47:51.179320  222240 kubeadm.go:394] duration metric: took 92.289811ms to StartCluster
	I0120 17:47:51.179336  222240 settings.go:142] acquiring lock: {Name:mk1c7d255bd6ff729fb7f0cda8440d084eb0c286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 17:47:51.179502  222240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 17:47:51.180779  222240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2518/kubeconfig: {Name:mk7eb37afa68734d2ba48fcac1147e4fe5c87411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 17:47:51.180992  222240 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 17:47:51.181481  222240 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 17:47:51.181554  222240 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-698725"
	I0120 17:47:51.181571  222240 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-698725"
	W0120 17:47:51.181585  222240 addons.go:247] addon storage-provisioner should already be in state true
	I0120 17:47:51.181609  222240 host.go:66] Checking if "embed-certs-698725" exists ...
	I0120 17:47:51.182100  222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
	I0120 17:47:51.182339  222240 config.go:182] Loaded profile config "embed-certs-698725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:47:51.182515  222240 addons.go:69] Setting default-storageclass=true in profile "embed-certs-698725"
	I0120 17:47:51.182538  222240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-698725"
	I0120 17:47:51.182844  222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
	I0120 17:47:51.183120  222240 addons.go:69] Setting metrics-server=true in profile "embed-certs-698725"
	I0120 17:47:51.183182  222240 addons.go:238] Setting addon metrics-server=true in "embed-certs-698725"
	W0120 17:47:51.183203  222240 addons.go:247] addon metrics-server should already be in state true
	I0120 17:47:51.183258  222240 host.go:66] Checking if "embed-certs-698725" exists ...
	I0120 17:47:51.183879  222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
	I0120 17:47:51.187028  222240 addons.go:69] Setting dashboard=true in profile "embed-certs-698725"
	I0120 17:47:51.187072  222240 addons.go:238] Setting addon dashboard=true in "embed-certs-698725"
	W0120 17:47:51.187081  222240 addons.go:247] addon dashboard should already be in state true
	I0120 17:47:51.187118  222240 host.go:66] Checking if "embed-certs-698725" exists ...
	I0120 17:47:51.187653  222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
	I0120 17:47:51.191852  222240 out.go:177] * Verifying Kubernetes components...
	I0120 17:47:51.195323  222240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 17:47:51.248151  222240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 17:47:51.251160  222240 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 17:47:51.251182  222240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 17:47:51.251253  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:51.270235  222240 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 17:47:51.273400  222240 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 17:47:51.277130  222240 addons.go:238] Setting addon default-storageclass=true in "embed-certs-698725"
	W0120 17:47:51.277153  222240 addons.go:247] addon default-storageclass should already be in state true
	I0120 17:47:51.277177  222240 host.go:66] Checking if "embed-certs-698725" exists ...
	I0120 17:47:51.277601  222240 cli_runner.go:164] Run: docker container inspect embed-certs-698725 --format={{.State.Status}}
	I0120 17:47:51.277815  222240 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 17:47:51.283449  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 17:47:51.283483  222240 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 17:47:51.283557  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:51.284063  222240 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 17:47:51.284078  222240 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 17:47:51.284134  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:51.309598  222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
	I0120 17:47:51.330575  222240 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 17:47:51.330595  222240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 17:47:51.330670  222240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-698725
	I0120 17:47:51.357154  222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
	I0120 17:47:51.365511  222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
	I0120 17:47:51.382832  222240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/embed-certs-698725/id_rsa Username:docker}
	I0120 17:47:51.409724  222240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 17:47:51.481912  222240 node_ready.go:35] waiting up to 6m0s for node "embed-certs-698725" to be "Ready" ...
	I0120 17:47:51.605075  222240 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 17:47:51.605095  222240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 17:47:51.649299  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 17:47:51.649365  222240 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 17:47:51.669197  222240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 17:47:51.692726  222240 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 17:47:51.692825  222240 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 17:47:51.802585  222240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 17:47:51.807299  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 17:47:51.807437  222240 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 17:47:51.828979  222240 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 17:47:51.829085  222240 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 17:47:51.864022  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 17:47:51.864125  222240 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 17:47:51.957526  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 17:47:51.957604  222240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 17:47:52.069854  222240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 17:47:52.259830  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 17:47:52.259899  222240 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 17:47:52.495822  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 17:47:52.495914  222240 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 17:47:52.610042  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 17:47:52.610124  222240 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 17:47:52.648240  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 17:47:52.648417  222240 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 17:47:52.695144  222240 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 17:47:52.695219  222240 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 17:47:52.734305  222240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 17:47:51.220240  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:53.731463  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:56.933993  222240 node_ready.go:49] node "embed-certs-698725" has status "Ready":"True"
	I0120 17:47:56.934030  222240 node_ready.go:38] duration metric: took 5.452072442s for node "embed-certs-698725" to be "Ready" ...
	I0120 17:47:56.934042  222240 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 17:47:56.962943  222240 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-hpgxx" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.010755  222240 pod_ready.go:93] pod "coredns-668d6bf9bc-hpgxx" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:57.010793  222240 pod_ready.go:82] duration metric: took 47.81453ms for pod "coredns-668d6bf9bc-hpgxx" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.010806  222240 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.030415  222240 pod_ready.go:93] pod "etcd-embed-certs-698725" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:57.030446  222240 pod_ready.go:82] duration metric: took 19.631401ms for pod "etcd-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.030463  222240 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.059418  222240 pod_ready.go:93] pod "kube-apiserver-embed-certs-698725" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:57.059485  222240 pod_ready.go:82] duration metric: took 29.013139ms for pod "kube-apiserver-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.059512  222240 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.081254  222240 pod_ready.go:93] pod "kube-controller-manager-embed-certs-698725" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:57.081331  222240 pod_ready.go:82] duration metric: took 21.787776ms for pod "kube-controller-manager-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.081359  222240 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cxzfl" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.156414  222240 pod_ready.go:93] pod "kube-proxy-cxzfl" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:57.156479  222240 pod_ready.go:82] duration metric: took 75.100014ms for pod "kube-proxy-cxzfl" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.156506  222240 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.538976  222240 pod_ready.go:93] pod "kube-scheduler-embed-certs-698725" in "kube-system" namespace has status "Ready":"True"
	I0120 17:47:57.539052  222240 pod_ready.go:82] duration metric: took 382.524773ms for pod "kube-scheduler-embed-certs-698725" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:57.539078  222240 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace to be "Ready" ...
	I0120 17:47:59.546251  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:00.172426  222240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.369737788s)
	I0120 17:48:00.172972  222240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.103017874s)
	I0120 17:48:00.173045  222240 addons.go:479] Verifying addon metrics-server=true in "embed-certs-698725"
	I0120 17:48:00.173853  222240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.504567299s)
	I0120 17:48:00.284550  222240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.550136834s)
	I0120 17:48:00.288168  222240 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-698725 addons enable metrics-server
	
	I0120 17:48:00.382565  222240 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0120 17:47:56.219975  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:47:58.715969  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:00.716172  216535 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:00.385911  222240 addons.go:514] duration metric: took 9.20441952s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
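(The addon phase above ends with metrics-server, storage-provisioner, dashboard and default-storageclass enabled on the embed-certs-698725 profile. As an illustration only, not part of the test harness, the same state could be checked interactively; this assumes the minikube-managed kubeconfig context carries the profile name from the log:)

		minikube -p embed-certs-698725 addons list
		kubectl --context embed-certs-698725 -n kube-system get deploy metrics-server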
	I0120 17:48:02.047336  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:01.717465  216535 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
	I0120 17:48:01.717491  216535 pod_ready.go:82] duration metric: took 17.007921004s for pod "kube-controller-manager-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.717503  216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxqgj" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.723321  216535 pod_ready.go:93] pod "kube-proxy-mxqgj" in "kube-system" namespace has status "Ready":"True"
	I0120 17:48:01.723396  216535 pod_ready.go:82] duration metric: took 5.87229ms for pod "kube-proxy-mxqgj" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.723409  216535 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.729329  216535 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace has status "Ready":"True"
	I0120 17:48:01.729356  216535 pod_ready.go:82] duration metric: took 5.938522ms for pod "kube-scheduler-old-k8s-version-145659" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:01.729367  216535 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace to be "Ready" ...
	I0120 17:48:03.811502  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:04.050574  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:06.059379  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:06.253893  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:08.739025  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:08.547983  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:11.048009  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:11.239058  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:13.736337  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:15.736465  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:13.549834  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:16.046201  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:18.247835  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:20.747201  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:18.545992  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:21.046806  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:23.242774  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:25.735545  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:23.545728  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:25.545844  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:27.736290  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:30.243746  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:28.045721  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:30.050780  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:32.545386  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:32.737127  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:34.737472  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:35.044878  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:37.045790  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:37.243570  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:39.245847  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:39.545372  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:41.545668  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:41.736938  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:44.242652  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:44.047439  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:46.546188  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:46.736378  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:49.243543  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:49.045049  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:51.047621  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:51.243642  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:53.244529  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:55.245129  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:53.545682  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:56.046309  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:57.736190  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:00.244816  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:48:58.546626  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:01.052472  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:02.245766  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:04.295578  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:03.545106  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:05.546016  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:06.736622  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:08.737036  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:10.737207  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:08.045654  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:10.045773  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:12.046443  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:13.242704  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:15.735684  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:14.545187  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:17.045842  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:17.737791  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:20.244523  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:19.046226  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:21.046493  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:22.244600  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:24.735659  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:23.546467  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:25.546509  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:26.736790  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:29.250223  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:28.046111  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:30.048239  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:32.544773  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:31.753850  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:34.236244  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:34.546808  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:37.045755  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:36.243241  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:38.736581  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:40.736828  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:39.544955  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:41.546930  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:43.238843  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:45.736169  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:44.049323  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:46.549194  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:47.736599  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:50.244905  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:49.045360  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:51.545331  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:52.737487  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:54.754561  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:54.045899  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:56.047862  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:57.236641  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:59.238929  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:49:58.548289  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:01.045428  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:01.241741  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:03.242427  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:05.736193  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:03.045861  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:05.046174  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:07.545820  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:07.736416  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:10.240839  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:09.548802  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:12.046389  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:12.244010  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:14.246547  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:14.545609  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:16.545885  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:16.737206  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:19.244440  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:18.546156  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:21.045616  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:21.244729  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:23.736600  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:23.545938  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:26.046091  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:26.244612  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:28.250474  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:30.739819  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:28.545649  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:30.545747  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:33.245363  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:35.737773  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:33.049985  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:35.058335  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:37.546305  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:37.742221  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:40.237488  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:40.047483  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:42.544937  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:42.738257  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:45.239382  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:44.545865  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:47.045906  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:47.736272  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:50.236202  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:49.046122  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:51.046239  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:52.239206  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:54.244758  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:53.545581  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:55.545829  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:56.736346  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:59.237672  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:50:58.046288  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:00.066802  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:02.545102  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:01.244367  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:03.736783  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:05.737354  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:04.545959  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:07.046059  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:08.235650  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:10.237001  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:09.049918  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:11.545323  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:12.237848  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:14.240863  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:14.045756  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:16.046288  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:16.243349  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:18.737611  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:18.046831  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:20.052058  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:22.546075  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:21.244639  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:23.735945  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:25.045178  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:27.046055  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:26.242287  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:28.735482  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:30.736321  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:29.545870  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:31.546237  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:32.736991  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:35.236754  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:34.046112  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:36.048791  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:37.244823  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:39.735311  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:38.546060  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:41.045351  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:41.735810  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:43.736169  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:45.742400  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:43.046135  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:45.047911  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:47.545521  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:48.243218  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:50.244231  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:49.545593  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:52.045901  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:52.244707  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:54.248009  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:54.545986  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:57.045628  222240 pod_ready.go:103] pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:57.545573  222240 pod_ready.go:82] duration metric: took 4m0.006469687s for pod "metrics-server-f79f97bbb-44zkt" in "kube-system" namespace to be "Ready" ...
	E0120 17:51:57.545600  222240 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 17:51:57.545610  222240 pod_ready.go:39] duration metric: took 4m0.611558284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
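(At this point the 4m0s WaitExtra budget expires while pod metrics-server-f79f97bbb-44zkt never reports Ready; that readiness condition is what the pod_ready.go loop above keeps polling. For illustration only, an equivalent manual check against the same pod and namespace, with pod name taken from the log and the timeout chosen arbitrarily, might look like:)

		kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-f79f97bbb-44zkt --timeout=240s
		kubectl -n kube-system get pod metrics-server-f79f97bbb-44zkt -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'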
	I0120 17:51:57.545626  222240 api_server.go:52] waiting for apiserver process to appear ...
	I0120 17:51:57.545656  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 17:51:57.545719  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 17:51:57.617009  222240 cri.go:89] found id: "05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
	I0120 17:51:57.617036  222240 cri.go:89] found id: "c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
	I0120 17:51:57.617041  222240 cri.go:89] found id: ""
	I0120 17:51:57.617048  222240 logs.go:282] 2 containers: [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6]
	I0120 17:51:57.617122  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.620712  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.624582  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 17:51:57.624654  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 17:51:57.676345  222240 cri.go:89] found id: "39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
	I0120 17:51:57.676366  222240 cri.go:89] found id: "21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
	I0120 17:51:57.676371  222240 cri.go:89] found id: ""
	I0120 17:51:57.676378  222240 logs.go:282] 2 containers: [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5]
	I0120 17:51:57.676439  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.680282  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.683581  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 17:51:57.683698  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 17:51:57.720584  222240 cri.go:89] found id: "76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
	I0120 17:51:57.720646  222240 cri.go:89] found id: "f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
	I0120 17:51:57.720657  222240 cri.go:89] found id: ""
	I0120 17:51:57.720667  222240 logs.go:282] 2 containers: [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6]
	I0120 17:51:57.720731  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.730284  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.737537  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 17:51:57.737615  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 17:51:57.776517  222240 cri.go:89] found id: "a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
	I0120 17:51:57.776539  222240 cri.go:89] found id: "2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
	I0120 17:51:57.776544  222240 cri.go:89] found id: ""
	I0120 17:51:57.776552  222240 logs.go:282] 2 containers: [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f]
	I0120 17:51:57.776606  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.779969  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.783102  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 17:51:57.783190  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 17:51:57.836806  222240 cri.go:89] found id: "b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
	I0120 17:51:57.836834  222240 cri.go:89] found id: "a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
	I0120 17:51:57.836840  222240 cri.go:89] found id: ""
	I0120 17:51:57.836847  222240 logs.go:282] 2 containers: [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166]
	I0120 17:51:57.836904  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.840666  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.844319  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 17:51:57.844393  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 17:51:57.894058  222240 cri.go:89] found id: "28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
	I0120 17:51:57.894082  222240 cri.go:89] found id: "b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
	I0120 17:51:57.894094  222240 cri.go:89] found id: ""
	I0120 17:51:57.894102  222240 logs.go:282] 2 containers: [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a]
	I0120 17:51:57.894165  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.898124  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.902319  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 17:51:57.902436  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 17:51:57.952198  222240 cri.go:89] found id: "f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
	I0120 17:51:57.952230  222240 cri.go:89] found id: "03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
	I0120 17:51:57.952236  222240 cri.go:89] found id: ""
	I0120 17:51:57.952244  222240 logs.go:282] 2 containers: [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f]
	I0120 17:51:57.952316  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.956592  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:57.960234  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 17:51:57.960332  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 17:51:58.013379  222240 cri.go:89] found id: "d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
	I0120 17:51:58.013413  222240 cri.go:89] found id: ""
	I0120 17:51:58.013422  222240 logs.go:282] 1 containers: [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8]
	I0120 17:51:58.013521  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:56.737674  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:59.241838  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:51:58.017708  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 17:51:58.017785  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 17:51:58.066449  222240 cri.go:89] found id: "edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
	I0120 17:51:58.066475  222240 cri.go:89] found id: "68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
	I0120 17:51:58.066481  222240 cri.go:89] found id: ""
	I0120 17:51:58.066489  222240 logs.go:282] 2 containers: [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7]
	I0120 17:51:58.066548  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:58.070700  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:51:58.074690  222240 logs.go:123] Gathering logs for kube-scheduler [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5] ...
	I0120 17:51:58.074719  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
	I0120 17:51:58.126524  222240 logs.go:123] Gathering logs for kindnet [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0] ...
	I0120 17:51:58.126555  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
	I0120 17:51:58.173512  222240 logs.go:123] Gathering logs for storage-provisioner [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee] ...
	I0120 17:51:58.173542  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
	I0120 17:51:58.221076  222240 logs.go:123] Gathering logs for storage-provisioner [68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7] ...
	I0120 17:51:58.221108  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
	I0120 17:51:58.290668  222240 logs.go:123] Gathering logs for container status ...
	I0120 17:51:58.290697  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 17:51:58.348834  222240 logs.go:123] Gathering logs for etcd [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c] ...
	I0120 17:51:58.348866  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
	I0120 17:51:58.398407  222240 logs.go:123] Gathering logs for coredns [f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6] ...
	I0120 17:51:58.398440  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
	I0120 17:51:58.439843  222240 logs.go:123] Gathering logs for kube-scheduler [2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f] ...
	I0120 17:51:58.439871  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
	I0120 17:51:58.503321  222240 logs.go:123] Gathering logs for kube-controller-manager [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa] ...
	I0120 17:51:58.503389  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
	I0120 17:51:58.585533  222240 logs.go:123] Gathering logs for kindnet [03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f] ...
	I0120 17:51:58.585565  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
	I0120 17:51:58.634511  222240 logs.go:123] Gathering logs for containerd ...
	I0120 17:51:58.634535  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 17:51:58.714218  222240 logs.go:123] Gathering logs for kubelet ...
	I0120 17:51:58.714256  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 17:51:58.806521  222240 logs.go:123] Gathering logs for etcd [21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5] ...
	I0120 17:51:58.806564  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
	I0120 17:51:58.858968  222240 logs.go:123] Gathering logs for kube-proxy [a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166] ...
	I0120 17:51:58.859000  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
	I0120 17:51:58.907920  222240 logs.go:123] Gathering logs for kubernetes-dashboard [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8] ...
	I0120 17:51:58.907954  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
	I0120 17:51:58.957809  222240 logs.go:123] Gathering logs for kube-apiserver [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2] ...
	I0120 17:51:58.957836  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
	I0120 17:51:59.014674  222240 logs.go:123] Gathering logs for kube-apiserver [c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6] ...
	I0120 17:51:59.014709  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
	I0120 17:51:59.066428  222240 logs.go:123] Gathering logs for coredns [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37] ...
	I0120 17:51:59.066465  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
	I0120 17:51:59.113438  222240 logs.go:123] Gathering logs for kube-proxy [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f] ...
	I0120 17:51:59.113467  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
	I0120 17:51:59.153989  222240 logs.go:123] Gathering logs for kube-controller-manager [b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a] ...
	I0120 17:51:59.154018  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
	I0120 17:51:59.221680  222240 logs.go:123] Gathering logs for dmesg ...
	I0120 17:51:59.221715  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 17:51:59.244909  222240 logs.go:123] Gathering logs for describe nodes ...
	I0120 17:51:59.244938  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
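(The gather step above resolves two container IDs per control-plane component with crictl, consistent with a restarted container, and then tails 400 lines from each, plus the containerd and kubelet journals. A minimal sketch of reproducing the same commands on the node, assuming crictl is on PATH and the default runtime socket, with <container-id> as a placeholder for an ID returned by the first command:)

		sudo crictl ps -a --quiet --name=kube-apiserver        # resolve container IDs by name
		sudo crictl logs --tail 400 <container-id>             # tail the last 400 lines of one container
		sudo journalctl -u containerd -n 400                   # runtime logs, as gathered above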
	I0120 17:52:01.987021  222240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 17:52:02.010001  222240 api_server.go:72] duration metric: took 4m10.828971389s to wait for apiserver process to appear ...
	I0120 17:52:02.010030  222240 api_server.go:88] waiting for apiserver healthz status ...
	I0120 17:52:02.010071  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 17:52:02.010138  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 17:52:02.093839  222240 cri.go:89] found id: "05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
	I0120 17:52:02.093863  222240 cri.go:89] found id: "c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
	I0120 17:52:02.093868  222240 cri.go:89] found id: ""
	I0120 17:52:02.093875  222240 logs.go:282] 2 containers: [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6]
	I0120 17:52:02.093931  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.099297  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.103702  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 17:52:02.103787  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 17:52:02.165550  222240 cri.go:89] found id: "39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
	I0120 17:52:02.165573  222240 cri.go:89] found id: "21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
	I0120 17:52:02.165579  222240 cri.go:89] found id: ""
	I0120 17:52:02.165586  222240 logs.go:282] 2 containers: [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5]
	I0120 17:52:02.165644  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.172628  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.177430  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 17:52:02.177507  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 17:52:02.250225  222240 cri.go:89] found id: "76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
	I0120 17:52:02.250250  222240 cri.go:89] found id: "f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
	I0120 17:52:02.250255  222240 cri.go:89] found id: ""
	I0120 17:52:02.250262  222240 logs.go:282] 2 containers: [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6]
	I0120 17:52:02.250319  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.254841  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.259738  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 17:52:02.259813  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 17:52:02.318546  222240 cri.go:89] found id: "a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
	I0120 17:52:02.318566  222240 cri.go:89] found id: "2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
	I0120 17:52:02.318572  222240 cri.go:89] found id: ""
	I0120 17:52:02.318579  222240 logs.go:282] 2 containers: [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f]
	I0120 17:52:02.318634  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.322902  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.327285  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 17:52:02.327378  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 17:52:02.392171  222240 cri.go:89] found id: "b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
	I0120 17:52:02.392192  222240 cri.go:89] found id: "a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
	I0120 17:52:02.392196  222240 cri.go:89] found id: ""
	I0120 17:52:02.392204  222240 logs.go:282] 2 containers: [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166]
	I0120 17:52:02.392279  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.396733  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.400973  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 17:52:02.401059  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 17:52:02.467222  222240 cri.go:89] found id: "28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
	I0120 17:52:02.467243  222240 cri.go:89] found id: "b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
	I0120 17:52:02.467248  222240 cri.go:89] found id: ""
	I0120 17:52:02.467255  222240 logs.go:282] 2 containers: [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a]
	I0120 17:52:02.467312  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.471371  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.475281  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 17:52:02.475502  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 17:52:02.525378  222240 cri.go:89] found id: "f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
	I0120 17:52:02.525398  222240 cri.go:89] found id: "03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
	I0120 17:52:02.525404  222240 cri.go:89] found id: ""
	I0120 17:52:02.525411  222240 logs.go:282] 2 containers: [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f]
	I0120 17:52:02.525466  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.529520  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.534115  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 17:52:02.534191  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 17:52:02.585681  222240 cri.go:89] found id: "d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
	I0120 17:52:02.585706  222240 cri.go:89] found id: ""
	I0120 17:52:02.585714  222240 logs.go:282] 1 containers: [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8]
	I0120 17:52:02.585781  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.590016  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 17:52:02.590093  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 17:52:02.640880  222240 cri.go:89] found id: "edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
	I0120 17:52:02.640904  222240 cri.go:89] found id: "68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
	I0120 17:52:02.640909  222240 cri.go:89] found id: ""
	I0120 17:52:02.640916  222240 logs.go:282] 2 containers: [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7]
	I0120 17:52:02.640972  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.650887  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.654957  222240 logs.go:123] Gathering logs for kube-apiserver [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2] ...
	I0120 17:52:02.654998  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
	I0120 17:52:02.739262  222240 logs.go:123] Gathering logs for kube-controller-manager [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa] ...
	I0120 17:52:02.739296  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
	I0120 17:52:02.811604  222240 logs.go:123] Gathering logs for storage-provisioner [68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7] ...
	I0120 17:52:02.811641  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
	I0120 17:52:02.858689  222240 logs.go:123] Gathering logs for containerd ...
	I0120 17:52:02.858718  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 17:52:02.947820  222240 logs.go:123] Gathering logs for kubelet ...
	I0120 17:52:02.947861  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 17:52:01.244283  216535 pod_ready.go:103] pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace has status "Ready":"False"
	I0120 17:52:01.736777  216535 pod_ready.go:82] duration metric: took 4m0.007395127s for pod "metrics-server-9975d5f86-wxlv8" in "kube-system" namespace to be "Ready" ...
	E0120 17:52:01.736846  216535 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 17:52:01.736870  216535 pod_ready.go:39] duration metric: took 5m28.474374205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 17:52:01.736899  216535 api_server.go:52] waiting for apiserver process to appear ...
	I0120 17:52:01.736964  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 17:52:01.737053  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 17:52:01.781253  216535 cri.go:89] found id: "f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
	I0120 17:52:01.781321  216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
	I0120 17:52:01.781341  216535 cri.go:89] found id: ""
	I0120 17:52:01.781356  216535 logs.go:282] 2 containers: [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e]
	I0120 17:52:01.781432  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.785393  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.788792  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 17:52:01.788862  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 17:52:01.833834  216535 cri.go:89] found id: "17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
	I0120 17:52:01.833869  216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
	I0120 17:52:01.833902  216535 cri.go:89] found id: ""
	I0120 17:52:01.833910  216535 logs.go:282] 2 containers: [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec]
	I0120 17:52:01.833990  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.838990  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.843467  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 17:52:01.843556  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 17:52:01.886764  216535 cri.go:89] found id: "583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
	I0120 17:52:01.886856  216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
	I0120 17:52:01.886877  216535 cri.go:89] found id: ""
	I0120 17:52:01.886908  216535 logs.go:282] 2 containers: [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc]
	I0120 17:52:01.886983  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.891011  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.894775  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 17:52:01.894856  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 17:52:01.949896  216535 cri.go:89] found id: "2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
	I0120 17:52:01.949920  216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
	I0120 17:52:01.949925  216535 cri.go:89] found id: ""
	I0120 17:52:01.949933  216535 logs.go:282] 2 containers: [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90]
	I0120 17:52:01.949992  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.954296  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:01.958371  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 17:52:01.958506  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 17:52:02.018621  216535 cri.go:89] found id: "dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
	I0120 17:52:02.018645  216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
	I0120 17:52:02.018650  216535 cri.go:89] found id: ""
	I0120 17:52:02.018657  216535 logs.go:282] 2 containers: [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42]
	I0120 17:52:02.018714  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.023690  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.028696  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 17:52:02.028860  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 17:52:02.096051  216535 cri.go:89] found id: "c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
	I0120 17:52:02.096073  216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
	I0120 17:52:02.096078  216535 cri.go:89] found id: ""
	I0120 17:52:02.096085  216535 logs.go:282] 2 containers: [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f]
	I0120 17:52:02.096149  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.100993  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.106917  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 17:52:02.106990  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 17:52:02.174049  216535 cri.go:89] found id: "6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
	I0120 17:52:02.174080  216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
	I0120 17:52:02.174086  216535 cri.go:89] found id: ""
	I0120 17:52:02.174093  216535 logs.go:282] 2 containers: [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f]
	I0120 17:52:02.174145  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.179127  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.184826  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 17:52:02.184901  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 17:52:02.254018  216535 cri.go:89] found id: "9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
	I0120 17:52:02.254041  216535 cri.go:89] found id: ""
	I0120 17:52:02.254049  216535 logs.go:282] 1 containers: [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8]
	I0120 17:52:02.254122  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.260217  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 17:52:02.260276  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 17:52:02.316256  216535 cri.go:89] found id: "027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
	I0120 17:52:02.316280  216535 cri.go:89] found id: "91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
	I0120 17:52:02.316286  216535 cri.go:89] found id: ""
	I0120 17:52:02.316293  216535 logs.go:282] 2 containers: [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd]
	I0120 17:52:02.316352  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.321766  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:02.327502  216535 logs.go:123] Gathering logs for dmesg ...
	I0120 17:52:02.327525  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 17:52:02.343747  216535 logs.go:123] Gathering logs for describe nodes ...
	I0120 17:52:02.343778  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 17:52:02.674989  216535 logs.go:123] Gathering logs for kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] ...
	I0120 17:52:02.675019  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
	I0120 17:52:02.739409  216535 logs.go:123] Gathering logs for kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] ...
	I0120 17:52:02.739429  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
	I0120 17:52:02.805987  216535 logs.go:123] Gathering logs for kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] ...
	I0120 17:52:02.806072  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
	I0120 17:52:02.862091  216535 logs.go:123] Gathering logs for kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] ...
	I0120 17:52:02.862117  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
	I0120 17:52:02.952148  216535 logs.go:123] Gathering logs for storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] ...
	I0120 17:52:02.952223  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
	I0120 17:52:03.020765  216535 logs.go:123] Gathering logs for container status ...
	I0120 17:52:03.020815  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 17:52:03.090382  216535 logs.go:123] Gathering logs for kubelet ...
	I0120 17:52:03.090580  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 17:52:03.161589  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:34 old-k8s-version-145659 kubelet[662]: E0120 17:46:34.880251     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.161853  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:35 old-k8s-version-145659 kubelet[662]: E0120 17:46:35.605048     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.165125  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:50 old-k8s-version-145659 kubelet[662]: E0120 17:46:50.413085     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.167727  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:03 old-k8s-version-145659 kubelet[662]: E0120 17:47:03.698813     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.167958  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.404037     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.168311  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.706245     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.168784  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.711644     662 pod_workers.go:191] Error syncing pod ceb78d8f-604f-44e7-a643-6a7788c747ae ("storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"
	W0120 17:52:03.169139  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.712757     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.170224  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:18 old-k8s-version-145659 kubelet[662]: E0120 17:47:18.760650     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.172926  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:19 old-k8s-version-145659 kubelet[662]: E0120 17:47:19.413053     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.173303  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:23 old-k8s-version-145659 kubelet[662]: E0120 17:47:23.877153     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.173514  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:31 old-k8s-version-145659 kubelet[662]: E0120 17:47:31.403908     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.173865  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:36 old-k8s-version-145659 kubelet[662]: E0120 17:47:36.403402     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.174073  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:46 old-k8s-version-145659 kubelet[662]: E0120 17:47:46.412253     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.174688  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:48 old-k8s-version-145659 kubelet[662]: E0120 17:47:48.845203     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.175052  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:53 old-k8s-version-145659 kubelet[662]: E0120 17:47:53.876712     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.175261  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:58 old-k8s-version-145659 kubelet[662]: E0120 17:47:58.411076     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.175632  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:06 old-k8s-version-145659 kubelet[662]: E0120 17:48:06.403375     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.178118  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:12 old-k8s-version-145659 kubelet[662]: E0120 17:48:12.422259     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.178583  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:21 old-k8s-version-145659 kubelet[662]: E0120 17:48:21.403254     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.178770  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:25 old-k8s-version-145659 kubelet[662]: E0120 17:48:25.404070     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.179381  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:34 old-k8s-version-145659 kubelet[662]: E0120 17:48:34.988709     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.179564  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:39 old-k8s-version-145659 kubelet[662]: E0120 17:48:39.403769     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.179889  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:43 old-k8s-version-145659 kubelet[662]: E0120 17:48:43.877519     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.180070  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:53 old-k8s-version-145659 kubelet[662]: E0120 17:48:53.403792     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.180396  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:58 old-k8s-version-145659 kubelet[662]: E0120 17:48:58.408685     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.180579  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:06 old-k8s-version-145659 kubelet[662]: E0120 17:49:06.403734     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.180905  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:11 old-k8s-version-145659 kubelet[662]: E0120 17:49:11.403959     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.181086  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:18 old-k8s-version-145659 kubelet[662]: E0120 17:49:18.408125     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.181407  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:24 old-k8s-version-145659 kubelet[662]: E0120 17:49:24.407972     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.181587  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:30 old-k8s-version-145659 kubelet[662]: E0120 17:49:30.404331     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.181909  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:37 old-k8s-version-145659 kubelet[662]: E0120 17:49:37.403265     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.184453  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:45 old-k8s-version-145659 kubelet[662]: E0120 17:49:45.414508     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:03.184816  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:48 old-k8s-version-145659 kubelet[662]: E0120 17:49:48.403936     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.185031  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:00 old-k8s-version-145659 kubelet[662]: E0120 17:50:00.404116     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.185681  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:04 old-k8s-version-145659 kubelet[662]: E0120 17:50:04.268511     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.185896  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:12 old-k8s-version-145659 kubelet[662]: E0120 17:50:12.407685     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.186251  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:13 old-k8s-version-145659 kubelet[662]: E0120 17:50:13.876917     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.186463  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:25 old-k8s-version-145659 kubelet[662]: E0120 17:50:25.403750     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.186830  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:28 old-k8s-version-145659 kubelet[662]: E0120 17:50:28.405640     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.187051  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:40 old-k8s-version-145659 kubelet[662]: E0120 17:50:40.403822     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.187407  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.187689  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.188047  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.188255  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.188613  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.188828  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.189195  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.189403  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.189758  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.189969  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.190324  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:03.190536  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:03.190894  216535 logs.go:138] Found kubelet problem: Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	I0120 17:52:03.190919  216535 logs.go:123] Gathering logs for etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] ...
	I0120 17:52:03.190947  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
	I0120 17:52:03.259910  216535 logs.go:123] Gathering logs for kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] ...
	I0120 17:52:03.259991  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
	I0120 17:52:03.317942  216535 logs.go:123] Gathering logs for kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] ...
	I0120 17:52:03.318013  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
	I0120 17:52:03.380525  216535 logs.go:123] Gathering logs for kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] ...
	I0120 17:52:03.380608  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
	I0120 17:52:03.453396  216535 logs.go:123] Gathering logs for coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] ...
	I0120 17:52:03.453442  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
	I0120 17:52:03.506945  216535 logs.go:123] Gathering logs for coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] ...
	I0120 17:52:03.506974  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
	I0120 17:52:03.555548  216535 logs.go:123] Gathering logs for kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] ...
	I0120 17:52:03.555628  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
	I0120 17:52:03.674894  216535 logs.go:123] Gathering logs for storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] ...
	I0120 17:52:03.674971  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
	I0120 17:52:03.746584  216535 logs.go:123] Gathering logs for containerd ...
	I0120 17:52:03.746608  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 17:52:03.830076  216535 logs.go:123] Gathering logs for kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] ...
	I0120 17:52:03.830148  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
	I0120 17:52:03.938308  216535 logs.go:123] Gathering logs for etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] ...
	I0120 17:52:03.938397  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
	I0120 17:52:04.023242  216535 logs.go:123] Gathering logs for kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] ...
	I0120 17:52:04.023376  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
	I0120 17:52:04.093186  216535 logs.go:123] Gathering logs for kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] ...
	I0120 17:52:04.093218  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
	I0120 17:52:04.203549  216535 out.go:358] Setting ErrFile to fd 2...
	I0120 17:52:04.203705  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 17:52:04.203798  216535 out.go:270] X Problems detected in kubelet:
	W0120 17:52:04.203843  216535 out.go:270]   Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:04.203889  216535 out.go:270]   Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:04.203925  216535 out.go:270]   Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:04.203955  216535 out.go:270]   Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:04.203988  216535 out.go:270]   Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	I0120 17:52:04.204019  216535 out.go:358] Setting ErrFile to fd 2...
	I0120 17:52:04.204048  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:52:03.040765  222240 logs.go:123] Gathering logs for describe nodes ...
	I0120 17:52:03.040864  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 17:52:03.208364  222240 logs.go:123] Gathering logs for kube-scheduler [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5] ...
	I0120 17:52:03.208447  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
	I0120 17:52:03.271218  222240 logs.go:123] Gathering logs for kindnet [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0] ...
	I0120 17:52:03.271250  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
	I0120 17:52:03.330849  222240 logs.go:123] Gathering logs for etcd [21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5] ...
	I0120 17:52:03.330882  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
	I0120 17:52:03.402129  222240 logs.go:123] Gathering logs for coredns [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37] ...
	I0120 17:52:03.402164  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
	I0120 17:52:03.452417  222240 logs.go:123] Gathering logs for coredns [f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6] ...
	I0120 17:52:03.452448  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
	I0120 17:52:03.507073  222240 logs.go:123] Gathering logs for kube-scheduler [2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f] ...
	I0120 17:52:03.507096  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
	I0120 17:52:03.578981  222240 logs.go:123] Gathering logs for kube-proxy [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f] ...
	I0120 17:52:03.579015  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
	I0120 17:52:03.641337  222240 logs.go:123] Gathering logs for kindnet [03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f] ...
	I0120 17:52:03.641362  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
	I0120 17:52:03.712323  222240 logs.go:123] Gathering logs for dmesg ...
	I0120 17:52:03.712353  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 17:52:03.728792  222240 logs.go:123] Gathering logs for kube-apiserver [c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6] ...
	I0120 17:52:03.728827  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
	I0120 17:52:03.822833  222240 logs.go:123] Gathering logs for kubernetes-dashboard [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8] ...
	I0120 17:52:03.822869  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
	I0120 17:52:03.881083  222240 logs.go:123] Gathering logs for kube-controller-manager [b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a] ...
	I0120 17:52:03.881120  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
	I0120 17:52:03.996717  222240 logs.go:123] Gathering logs for storage-provisioner [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee] ...
	I0120 17:52:03.996798  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
	I0120 17:52:04.052407  222240 logs.go:123] Gathering logs for container status ...
	I0120 17:52:04.052485  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 17:52:04.117749  222240 logs.go:123] Gathering logs for etcd [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c] ...
	I0120 17:52:04.117833  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
	I0120 17:52:04.177511  222240 logs.go:123] Gathering logs for kube-proxy [a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166] ...
	I0120 17:52:04.177544  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
	I0120 17:52:06.729219  222240 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0120 17:52:06.738133  222240 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0120 17:52:06.739310  222240 api_server.go:141] control plane version: v1.32.0
	I0120 17:52:06.739337  222240 api_server.go:131] duration metric: took 4.729299032s to wait for apiserver health ...
	I0120 17:52:06.739387  222240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 17:52:06.739413  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 17:52:06.739473  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 17:52:06.780997  222240 cri.go:89] found id: "05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
	I0120 17:52:06.781019  222240 cri.go:89] found id: "c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
	I0120 17:52:06.781025  222240 cri.go:89] found id: ""
	I0120 17:52:06.781032  222240 logs.go:282] 2 containers: [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6]
	I0120 17:52:06.781100  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.785102  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.789052  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 17:52:06.789149  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 17:52:06.828043  222240 cri.go:89] found id: "39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
	I0120 17:52:06.828066  222240 cri.go:89] found id: "21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
	I0120 17:52:06.828071  222240 cri.go:89] found id: ""
	I0120 17:52:06.828079  222240 logs.go:282] 2 containers: [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5]
	I0120 17:52:06.828142  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.831797  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.835573  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 17:52:06.835722  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 17:52:06.876754  222240 cri.go:89] found id: "76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
	I0120 17:52:06.876778  222240 cri.go:89] found id: "f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
	I0120 17:52:06.876783  222240 cri.go:89] found id: ""
	I0120 17:52:06.876790  222240 logs.go:282] 2 containers: [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6]
	I0120 17:52:06.876846  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.880582  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.884412  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 17:52:06.884525  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 17:52:06.928663  222240 cri.go:89] found id: "a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
	I0120 17:52:06.928728  222240 cri.go:89] found id: "2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
	I0120 17:52:06.928746  222240 cri.go:89] found id: ""
	I0120 17:52:06.928768  222240 logs.go:282] 2 containers: [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f]
	I0120 17:52:06.928854  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.932910  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.937039  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 17:52:06.937164  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 17:52:06.985011  222240 cri.go:89] found id: "b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
	I0120 17:52:06.985083  222240 cri.go:89] found id: "a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
	I0120 17:52:06.985101  222240 cri.go:89] found id: ""
	I0120 17:52:06.985123  222240 logs.go:282] 2 containers: [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166]
	I0120 17:52:06.985208  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.988821  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:06.992483  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 17:52:06.992560  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 17:52:07.035044  222240 cri.go:89] found id: "28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
	I0120 17:52:07.035115  222240 cri.go:89] found id: "b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
	I0120 17:52:07.035146  222240 cri.go:89] found id: ""
	I0120 17:52:07.035170  222240 logs.go:282] 2 containers: [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a]
	I0120 17:52:07.035259  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:07.039075  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:07.042498  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 17:52:07.042570  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 17:52:07.079877  222240 cri.go:89] found id: "f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
	I0120 17:52:07.079951  222240 cri.go:89] found id: "03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
	I0120 17:52:07.079970  222240 cri.go:89] found id: ""
	I0120 17:52:07.079984  222240 logs.go:282] 2 containers: [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f]
	I0120 17:52:07.080056  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:07.086332  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:07.092807  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 17:52:07.092925  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 17:52:07.138181  222240 cri.go:89] found id: "edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
	I0120 17:52:07.138204  222240 cri.go:89] found id: "68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
	I0120 17:52:07.138209  222240 cri.go:89] found id: ""
	I0120 17:52:07.138216  222240 logs.go:282] 2 containers: [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7]
	I0120 17:52:07.138278  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:07.142180  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:07.145487  222240 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 17:52:07.145581  222240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 17:52:07.181159  222240 cri.go:89] found id: "d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
	I0120 17:52:07.181182  222240 cri.go:89] found id: ""
	I0120 17:52:07.181189  222240 logs.go:282] 1 containers: [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8]
	I0120 17:52:07.181262  222240 ssh_runner.go:195] Run: which crictl
	I0120 17:52:07.185182  222240 logs.go:123] Gathering logs for etcd [39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c] ...
	I0120 17:52:07.185209  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b0d86194d81004959b58d73150c0d01976e360407ff35630cfa513879cbf5c"
	I0120 17:52:07.228879  222240 logs.go:123] Gathering logs for coredns [76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37] ...
	I0120 17:52:07.228910  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76749734b0382e9e1dad6d2ab3dc9ef4c9f18ec9002b3bdcac9bac7c08c57e37"
	I0120 17:52:07.275233  222240 logs.go:123] Gathering logs for coredns [f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6] ...
	I0120 17:52:07.275277  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4cad3e16146fa364c6312d364d531c112087278646a7ba2b9471813e236efe6"
	I0120 17:52:07.317238  222240 logs.go:123] Gathering logs for kube-proxy [a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166] ...
	I0120 17:52:07.317274  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9e2684170d273a6b206e843bb0f7b74387f086c5615424eac87717fa3f29166"
	I0120 17:52:07.356737  222240 logs.go:123] Gathering logs for kube-controller-manager [b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a] ...
	I0120 17:52:07.356763  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3a3c76cfd69bd52343187e75cc1d86d022293b08ddfc09f796e6c59a1467b4a"
	I0120 17:52:07.432622  222240 logs.go:123] Gathering logs for kindnet [03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f] ...
	I0120 17:52:07.432657  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03ece87267fb41c4fd92f1fa9538da56358af5a44a9694be940f3d4aa781247f"
	I0120 17:52:07.485007  222240 logs.go:123] Gathering logs for storage-provisioner [68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7] ...
	I0120 17:52:07.485035  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68498da44ebb8e0f3a0fccdd766d67588e9269eaa6a69afbaf5ec0b2642ba1f7"
	I0120 17:52:07.530703  222240 logs.go:123] Gathering logs for kube-apiserver [c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6] ...
	I0120 17:52:07.530738  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6288b0c524d2b13689dc5e4f1348f46d9c634b8fd93643dd08500b2452c39f6"
	I0120 17:52:07.604556  222240 logs.go:123] Gathering logs for containerd ...
	I0120 17:52:07.604592  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 17:52:07.676302  222240 logs.go:123] Gathering logs for kubernetes-dashboard [d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8] ...
	I0120 17:52:07.676345  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0170d90e5edaf4f11e58bc6d849ef13a1c52fa33aadd15a3f73d9c05abf88c8"
	I0120 17:52:07.724747  222240 logs.go:123] Gathering logs for kube-scheduler [a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5] ...
	I0120 17:52:07.724775  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a698a553a4369cbe45d9011f095744be626946f0e637168406dc1af66d26cce5"
	I0120 17:52:07.764536  222240 logs.go:123] Gathering logs for kube-scheduler [2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f] ...
	I0120 17:52:07.764564  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bc4245f956b9152343fb1dc31a73df9bf60908c290e60a314b896693cf4482f"
	I0120 17:52:07.821815  222240 logs.go:123] Gathering logs for kube-controller-manager [28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa] ...
	I0120 17:52:07.821850  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28b0e32e4daf58d60c20cfbc9cc483894053f7ca883f24fb98e6a4b32b44fcaa"
	I0120 17:52:07.903863  222240 logs.go:123] Gathering logs for kindnet [f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0] ...
	I0120 17:52:07.903898  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f116a7468c02aa9cbd22346a571979d10bb6b98e05633554b125fda5063009d0"
	I0120 17:52:07.953613  222240 logs.go:123] Gathering logs for etcd [21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5] ...
	I0120 17:52:07.953642  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21a5c6cab4dc7783c2458f363cdf5a6ae9545f91a7afd21a1cac0f3f8c662ae5"
	I0120 17:52:08.011222  222240 logs.go:123] Gathering logs for kube-apiserver [05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2] ...
	I0120 17:52:08.011260  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05b81e54ba662518c386660143cd24973bf94f614533da225eb481daf10bb6a2"
	I0120 17:52:08.081563  222240 logs.go:123] Gathering logs for kube-proxy [b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f] ...
	I0120 17:52:08.081596  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b841dbc664f6671cbc74af0f220a0313ce7c73e4ea5790040d9d48d63d2b496f"
	I0120 17:52:08.126297  222240 logs.go:123] Gathering logs for storage-provisioner [edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee] ...
	I0120 17:52:08.126336  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd54069078656a006b9e3db6b570bcce19cc33959a3f45c5ae2bea24abff5ee"
	I0120 17:52:08.168888  222240 logs.go:123] Gathering logs for dmesg ...
	I0120 17:52:08.168917  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 17:52:08.192653  222240 logs.go:123] Gathering logs for describe nodes ...
	I0120 17:52:08.192684  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 17:52:08.344570  222240 logs.go:123] Gathering logs for container status ...
	I0120 17:52:08.344601  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 17:52:08.390944  222240 logs.go:123] Gathering logs for kubelet ...
	I0120 17:52:08.390973  222240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 17:52:10.983596  222240 system_pods.go:59] 9 kube-system pods found
	I0120 17:52:10.983687  222240 system_pods.go:61] "coredns-668d6bf9bc-hpgxx" [aa92c9f5-e893-4de3-96b0-ca01664fffdb] Running
	I0120 17:52:10.983709  222240 system_pods.go:61] "etcd-embed-certs-698725" [04251eb0-9233-4252-a36f-cb9982b6cf58] Running
	I0120 17:52:10.983725  222240 system_pods.go:61] "kindnet-7bpzp" [5f1ef73a-3e79-4e00-ab0d-3fa04bafcf4d] Running
	I0120 17:52:10.983743  222240 system_pods.go:61] "kube-apiserver-embed-certs-698725" [1eff48d5-cec4-493a-9408-49a0db22ad25] Running
	I0120 17:52:10.983749  222240 system_pods.go:61] "kube-controller-manager-embed-certs-698725" [0d662fa3-2c7c-4a82-9de7-1a220a569b38] Running
	I0120 17:52:10.983762  222240 system_pods.go:61] "kube-proxy-cxzfl" [b77e79d8-c097-401e-a08c-b1338305f9eb] Running
	I0120 17:52:10.983777  222240 system_pods.go:61] "kube-scheduler-embed-certs-698725" [3639200c-a355-409a-9dbb-6298c975ff23] Running
	I0120 17:52:10.983786  222240 system_pods.go:61] "metrics-server-f79f97bbb-44zkt" [5d7d7a02-93d2-460c-9bf9-0716128b06d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 17:52:10.983800  222240 system_pods.go:61] "storage-provisioner" [94c17bca-4ede-44c1-a68c-c742181c749f] Running
	I0120 17:52:10.983808  222240 system_pods.go:74] duration metric: took 4.244414285s to wait for pod list to return data ...
	I0120 17:52:10.983819  222240 default_sa.go:34] waiting for default service account to be created ...
	I0120 17:52:10.987628  222240 default_sa.go:45] found service account: "default"
	I0120 17:52:10.987655  222240 default_sa.go:55] duration metric: took 3.830033ms for default service account to be created ...
	I0120 17:52:10.987664  222240 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 17:52:10.993748  222240 system_pods.go:87] 9 kube-system pods found
	I0120 17:52:10.997144  222240 system_pods.go:105] "coredns-668d6bf9bc-hpgxx" [aa92c9f5-e893-4de3-96b0-ca01664fffdb] Running
	I0120 17:52:10.997167  222240 system_pods.go:105] "etcd-embed-certs-698725" [04251eb0-9233-4252-a36f-cb9982b6cf58] Running
	I0120 17:52:10.997173  222240 system_pods.go:105] "kindnet-7bpzp" [5f1ef73a-3e79-4e00-ab0d-3fa04bafcf4d] Running
	I0120 17:52:10.997179  222240 system_pods.go:105] "kube-apiserver-embed-certs-698725" [1eff48d5-cec4-493a-9408-49a0db22ad25] Running
	I0120 17:52:10.997184  222240 system_pods.go:105] "kube-controller-manager-embed-certs-698725" [0d662fa3-2c7c-4a82-9de7-1a220a569b38] Running
	I0120 17:52:10.997190  222240 system_pods.go:105] "kube-proxy-cxzfl" [b77e79d8-c097-401e-a08c-b1338305f9eb] Running
	I0120 17:52:10.997195  222240 system_pods.go:105] "kube-scheduler-embed-certs-698725" [3639200c-a355-409a-9dbb-6298c975ff23] Running
	I0120 17:52:10.997204  222240 system_pods.go:105] "metrics-server-f79f97bbb-44zkt" [5d7d7a02-93d2-460c-9bf9-0716128b06d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 17:52:10.997210  222240 system_pods.go:105] "storage-provisioner" [94c17bca-4ede-44c1-a68c-c742181c749f] Running
	I0120 17:52:10.997219  222240 system_pods.go:147] duration metric: took 9.549462ms to wait for k8s-apps to be running ...
	I0120 17:52:10.997229  222240 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 17:52:10.997288  222240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 17:52:11.012272  222240 system_svc.go:56] duration metric: took 15.033469ms WaitForService to wait for kubelet
	I0120 17:52:11.012301  222240 kubeadm.go:582] duration metric: took 4m19.831275097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 17:52:11.012321  222240 node_conditions.go:102] verifying NodePressure condition ...
	I0120 17:52:11.015569  222240 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0120 17:52:11.015603  222240 node_conditions.go:123] node cpu capacity is 2
	I0120 17:52:11.015617  222240 node_conditions.go:105] duration metric: took 3.290494ms to run NodePressure ...
	I0120 17:52:11.015630  222240 start.go:241] waiting for startup goroutines ...
	I0120 17:52:11.015638  222240 start.go:246] waiting for cluster config update ...
	I0120 17:52:11.015649  222240 start.go:255] writing updated cluster config ...
	I0120 17:52:11.015968  222240 ssh_runner.go:195] Run: rm -f paused
	I0120 17:52:11.079702  222240 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 17:52:11.084877  222240 out.go:177] * Done! kubectl is now configured to use "embed-certs-698725" cluster and "default" namespace by default
	I0120 17:52:14.204540  216535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 17:52:14.216885  216535 api_server.go:72] duration metric: took 5m59.990640844s to wait for apiserver process to appear ...
	I0120 17:52:14.216913  216535 api_server.go:88] waiting for apiserver healthz status ...
	I0120 17:52:14.216952  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 17:52:14.217012  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 17:52:14.275816  216535 cri.go:89] found id: "f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
	I0120 17:52:14.275838  216535 cri.go:89] found id: "a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
	I0120 17:52:14.275843  216535 cri.go:89] found id: ""
	I0120 17:52:14.275850  216535 logs.go:282] 2 containers: [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e]
	I0120 17:52:14.275981  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.280911  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.284620  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 17:52:14.284694  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 17:52:14.324506  216535 cri.go:89] found id: "17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
	I0120 17:52:14.324530  216535 cri.go:89] found id: "658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
	I0120 17:52:14.324536  216535 cri.go:89] found id: ""
	I0120 17:52:14.324544  216535 logs.go:282] 2 containers: [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec]
	I0120 17:52:14.324602  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.328307  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.331742  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 17:52:14.331812  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 17:52:14.375892  216535 cri.go:89] found id: "583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
	I0120 17:52:14.375913  216535 cri.go:89] found id: "c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
	I0120 17:52:14.375919  216535 cri.go:89] found id: ""
	I0120 17:52:14.375926  216535 logs.go:282] 2 containers: [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc]
	I0120 17:52:14.376011  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.379798  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.383248  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 17:52:14.383317  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 17:52:14.431319  216535 cri.go:89] found id: "2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
	I0120 17:52:14.431376  216535 cri.go:89] found id: "9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
	I0120 17:52:14.431382  216535 cri.go:89] found id: ""
	I0120 17:52:14.431388  216535 logs.go:282] 2 containers: [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90]
	I0120 17:52:14.431444  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.435015  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.438536  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 17:52:14.438604  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 17:52:14.483659  216535 cri.go:89] found id: "dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
	I0120 17:52:14.483691  216535 cri.go:89] found id: "6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
	I0120 17:52:14.483697  216535 cri.go:89] found id: ""
	I0120 17:52:14.483703  216535 logs.go:282] 2 containers: [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42]
	I0120 17:52:14.483778  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.487550  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.491261  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 17:52:14.491399  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 17:52:14.537554  216535 cri.go:89] found id: "c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
	I0120 17:52:14.537574  216535 cri.go:89] found id: "6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
	I0120 17:52:14.537580  216535 cri.go:89] found id: ""
	I0120 17:52:14.537587  216535 logs.go:282] 2 containers: [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f]
	I0120 17:52:14.537645  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.541369  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.544958  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 17:52:14.545047  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 17:52:14.582569  216535 cri.go:89] found id: "6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
	I0120 17:52:14.582592  216535 cri.go:89] found id: "c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
	I0120 17:52:14.582598  216535 cri.go:89] found id: ""
	I0120 17:52:14.582605  216535 logs.go:282] 2 containers: [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f]
	I0120 17:52:14.582683  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.586500  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.590053  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 17:52:14.590126  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 17:52:14.663263  216535 cri.go:89] found id: "027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
	I0120 17:52:14.663283  216535 cri.go:89] found id: "91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
	I0120 17:52:14.663289  216535 cri.go:89] found id: ""
	I0120 17:52:14.663296  216535 logs.go:282] 2 containers: [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd]
	I0120 17:52:14.663372  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.666867  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.672075  216535 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 17:52:14.672174  216535 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 17:52:14.720019  216535 cri.go:89] found id: "9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
	I0120 17:52:14.720042  216535 cri.go:89] found id: ""
	I0120 17:52:14.720054  216535 logs.go:282] 1 containers: [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8]
	I0120 17:52:14.720116  216535 ssh_runner.go:195] Run: which crictl
	I0120 17:52:14.723774  216535 logs.go:123] Gathering logs for kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] ...
	I0120 17:52:14.723800  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478"
	I0120 17:52:14.773380  216535 logs.go:123] Gathering logs for storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] ...
	I0120 17:52:14.773417  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2"
	I0120 17:52:14.816814  216535 logs.go:123] Gathering logs for kubelet ...
	I0120 17:52:14.816842  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 17:52:14.876608  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:34 old-k8s-version-145659 kubelet[662]: E0120 17:46:34.880251     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.876839  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:35 old-k8s-version-145659 kubelet[662]: E0120 17:46:35.605048     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.879700  216535 logs.go:138] Found kubelet problem: Jan 20 17:46:50 old-k8s-version-145659 kubelet[662]: E0120 17:46:50.413085     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.883739  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:03 old-k8s-version-145659 kubelet[662]: E0120 17:47:03.698813     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.883950  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.404037     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.884282  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:04 old-k8s-version-145659 kubelet[662]: E0120 17:47:04.706245     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.884720  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.711644     662 pod_workers.go:191] Error syncing pod ceb78d8f-604f-44e7-a643-6a7788c747ae ("storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ceb78d8f-604f-44e7-a643-6a7788c747ae)"
	W0120 17:52:14.885047  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:05 old-k8s-version-145659 kubelet[662]: E0120 17:47:05.712757     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.886100  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:18 old-k8s-version-145659 kubelet[662]: E0120 17:47:18.760650     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.888645  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:19 old-k8s-version-145659 kubelet[662]: E0120 17:47:19.413053     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.889002  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:23 old-k8s-version-145659 kubelet[662]: E0120 17:47:23.877153     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.889194  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:31 old-k8s-version-145659 kubelet[662]: E0120 17:47:31.403908     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.889559  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:36 old-k8s-version-145659 kubelet[662]: E0120 17:47:36.403402     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.889746  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:46 old-k8s-version-145659 kubelet[662]: E0120 17:47:46.412253     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.890333  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:48 old-k8s-version-145659 kubelet[662]: E0120 17:47:48.845203     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.890660  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:53 old-k8s-version-145659 kubelet[662]: E0120 17:47:53.876712     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.890848  216535 logs.go:138] Found kubelet problem: Jan 20 17:47:58 old-k8s-version-145659 kubelet[662]: E0120 17:47:58.411076     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.891179  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:06 old-k8s-version-145659 kubelet[662]: E0120 17:48:06.403375     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.893674  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:12 old-k8s-version-145659 kubelet[662]: E0120 17:48:12.422259     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.894035  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:21 old-k8s-version-145659 kubelet[662]: E0120 17:48:21.403254     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.894400  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:25 old-k8s-version-145659 kubelet[662]: E0120 17:48:25.404070     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.895006  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:34 old-k8s-version-145659 kubelet[662]: E0120 17:48:34.988709     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.895192  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:39 old-k8s-version-145659 kubelet[662]: E0120 17:48:39.403769     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.895564  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:43 old-k8s-version-145659 kubelet[662]: E0120 17:48:43.877519     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.895751  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:53 old-k8s-version-145659 kubelet[662]: E0120 17:48:53.403792     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.896077  216535 logs.go:138] Found kubelet problem: Jan 20 17:48:58 old-k8s-version-145659 kubelet[662]: E0120 17:48:58.408685     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.896260  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:06 old-k8s-version-145659 kubelet[662]: E0120 17:49:06.403734     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.896584  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:11 old-k8s-version-145659 kubelet[662]: E0120 17:49:11.403959     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.896768  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:18 old-k8s-version-145659 kubelet[662]: E0120 17:49:18.408125     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.897094  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:24 old-k8s-version-145659 kubelet[662]: E0120 17:49:24.407972     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.897306  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:30 old-k8s-version-145659 kubelet[662]: E0120 17:49:30.404331     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.897633  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:37 old-k8s-version-145659 kubelet[662]: E0120 17:49:37.403265     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.900069  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:45 old-k8s-version-145659 kubelet[662]: E0120 17:49:45.414508     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0120 17:52:14.900399  216535 logs.go:138] Found kubelet problem: Jan 20 17:49:48 old-k8s-version-145659 kubelet[662]: E0120 17:49:48.403936     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.900588  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:00 old-k8s-version-145659 kubelet[662]: E0120 17:50:00.404116     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.901175  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:04 old-k8s-version-145659 kubelet[662]: E0120 17:50:04.268511     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.901358  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:12 old-k8s-version-145659 kubelet[662]: E0120 17:50:12.407685     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.901683  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:13 old-k8s-version-145659 kubelet[662]: E0120 17:50:13.876917     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.901866  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:25 old-k8s-version-145659 kubelet[662]: E0120 17:50:25.403750     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.902191  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:28 old-k8s-version-145659 kubelet[662]: E0120 17:50:28.405640     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.902379  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:40 old-k8s-version-145659 kubelet[662]: E0120 17:50:40.403822     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.902706  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.902892  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.903219  216535 logs.go:138] Found kubelet problem: Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.903413  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.903739  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.903923  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.904249  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.904433  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.904758  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.904944  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.905272  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.905457  216535 logs.go:138] Found kubelet problem: Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.905785  216535 logs.go:138] Found kubelet problem: Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:14.905970  216535 logs.go:138] Found kubelet problem: Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:14.906299  216535 logs.go:138] Found kubelet problem: Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	I0120 17:52:14.906310  216535 logs.go:123] Gathering logs for kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] ...
	I0120 17:52:14.906325  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad"
	I0120 17:52:14.972580  216535 logs.go:123] Gathering logs for coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] ...
	I0120 17:52:14.972618  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647"
	I0120 17:52:15.024121  216535 logs.go:123] Gathering logs for containerd ...
	I0120 17:52:15.024165  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 17:52:15.100734  216535 logs.go:123] Gathering logs for describe nodes ...
	I0120 17:52:15.100774  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 17:52:15.284993  216535 logs.go:123] Gathering logs for coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] ...
	I0120 17:52:15.285026  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc"
	I0120 17:52:15.335235  216535 logs.go:123] Gathering logs for kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] ...
	I0120 17:52:15.335264  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f"
	I0120 17:52:15.374772  216535 logs.go:123] Gathering logs for storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] ...
	I0120 17:52:15.374806  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd"
	I0120 17:52:15.433634  216535 logs.go:123] Gathering logs for container status ...
	I0120 17:52:15.433663  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 17:52:15.488059  216535 logs.go:123] Gathering logs for etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] ...
	I0120 17:52:15.488091  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192"
	I0120 17:52:15.542254  216535 logs.go:123] Gathering logs for kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] ...
	I0120 17:52:15.542284  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040"
	I0120 17:52:15.582486  216535 logs.go:123] Gathering logs for kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] ...
	I0120 17:52:15.582513  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f"
	I0120 17:52:15.660944  216535 logs.go:123] Gathering logs for kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] ...
	I0120 17:52:15.661023  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90"
	I0120 17:52:15.709672  216535 logs.go:123] Gathering logs for kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] ...
	I0120 17:52:15.709763  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e"
	I0120 17:52:15.755613  216535 logs.go:123] Gathering logs for kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] ...
	I0120 17:52:15.755647  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42"
	I0120 17:52:15.794100  216535 logs.go:123] Gathering logs for kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] ...
	I0120 17:52:15.794126  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c"
	I0120 17:52:15.876898  216535 logs.go:123] Gathering logs for kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] ...
	I0120 17:52:15.876935  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8"
	I0120 17:52:15.937814  216535 logs.go:123] Gathering logs for dmesg ...
	I0120 17:52:15.937842  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 17:52:15.955450  216535 logs.go:123] Gathering logs for kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] ...
	I0120 17:52:15.955481  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e"
	I0120 17:52:16.047655  216535 logs.go:123] Gathering logs for etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] ...
	I0120 17:52:16.047691  216535 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec"
	I0120 17:52:16.094113  216535 out.go:358] Setting ErrFile to fd 2...
	I0120 17:52:16.094145  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 17:52:16.094250  216535 out.go:270] X Problems detected in kubelet:
	W0120 17:52:16.094269  216535 out.go:270]   Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:16.094283  216535 out.go:270]   Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:16.094294  216535 out.go:270]   Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	W0120 17:52:16.094301  216535 out.go:270]   Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 17:52:16.094307  216535 out.go:270]   Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	I0120 17:52:16.094313  216535 out.go:358] Setting ErrFile to fd 2...
	I0120 17:52:16.094320  216535 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:52:26.095908  216535 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0120 17:52:26.165226  216535 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0120 17:52:26.168436  216535 out.go:201] 
	W0120 17:52:26.171235  216535 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0120 17:52:26.171279  216535 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0120 17:52:26.171300  216535 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0120 17:52:26.171306  216535 out.go:270] * 
	W0120 17:52:26.172503  216535 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 17:52:26.175703  216535 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	7188f8c06a3b3       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   c8fc9fd032271       dashboard-metrics-scraper-8d5bb5db8-cl8l4
	027296a495300       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   c99266b0f867a       storage-provisioner
	9d777334c1d3a       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   1bc38e3d090e3       kubernetes-dashboard-cd95d586-httgs
	442a35203c65d       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   76333aa0b4be7       busybox
	91b967b2a1923       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   c99266b0f867a       storage-provisioner
	6471c303e0b43       2be0bcf609c65       5 minutes ago       Running             kindnet-cni                 1                   547911b4053be       kindnet-lqrj9
	583937fe82126       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   78fc3bab14b29       coredns-74ff55c5b-gtjp2
	dcaa2ffccfffd       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   e8b9756b5dba1       kube-proxy-mxqgj
	2c63e2dabdc91       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   6eb729587ce65       kube-scheduler-old-k8s-version-145659
	17f42bfa9e9d5       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   2c11c7fde733f       etcd-old-k8s-version-145659
	c5b412b8a50ed       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   1d45273f8403b       kube-controller-manager-old-k8s-version-145659
	f8793ba82cf05       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   44abba68b72db       kube-apiserver-old-k8s-version-145659
	f8e366ebf8ddf       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   3ad30ae7d131f       busybox
	c290ff766fe32       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   72f070c27b32a       coredns-74ff55c5b-gtjp2
	c56a4a523a5ef       2be0bcf609c65       8 minutes ago       Exited              kindnet-cni                 0                   ba9b4fddad7e6       kindnet-lqrj9
	6be91b040ecf6       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   2d7942fce4b20       kube-proxy-mxqgj
	9e7debe8caa85       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   4d603e4e91de4       kube-scheduler-old-k8s-version-145659
	a3c342d8958e0       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   6ee593deec223       kube-apiserver-old-k8s-version-145659
	6be5e3acc8c3c       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   a194d95ea6f50       kube-controller-manager-old-k8s-version-145659
	658c4e5a0b63e       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   3b7de13a4f79d       etcd-old-k8s-version-145659
	
	
	==> containerd <==
	Jan 20 17:48:12 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:12.421777380Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.414102638Z" level=info msg="CreateContainer within sandbox \"c8fc9fd0322711bf7409db7fc055c6d3a6c4056ca87d9ffcc5a0da3be450a7f8\" for container name:\"dashboard-metrics-scraper\" attempt:4"
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.435965181Z" level=info msg="CreateContainer within sandbox \"c8fc9fd0322711bf7409db7fc055c6d3a6c4056ca87d9ffcc5a0da3be450a7f8\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\""
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.436861349Z" level=info msg="StartContainer for \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\""
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.511720980Z" level=info msg="StartContainer for \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\" returns successfully"
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.511883245Z" level=info msg="received exit event container_id:\"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\" id:\"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\" pid:3071 exit_status:255 exited_at:{seconds:1737395314 nanos:508816228}"
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.544159399Z" level=info msg="shim disconnected" id=4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37 namespace=k8s.io
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.544222128Z" level=warning msg="cleaning up after shim disconnected" id=4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37 namespace=k8s.io
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.544231564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.990379726Z" level=info msg="RemoveContainer for \"3520634dcc87f6efc329f938ca9ec9a853f8815395f1f46fa9549f1b259dee86\""
	Jan 20 17:48:34 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:48:34.998061739Z" level=info msg="RemoveContainer for \"3520634dcc87f6efc329f938ca9ec9a853f8815395f1f46fa9549f1b259dee86\" returns successfully"
	Jan 20 17:49:45 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:49:45.405350188Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 17:49:45 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:49:45.411210393Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Jan 20 17:49:45 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:49:45.413351285Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jan 20 17:49:45 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:49:45.413389324Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.405411386Z" level=info msg="CreateContainer within sandbox \"c8fc9fd0322711bf7409db7fc055c6d3a6c4056ca87d9ffcc5a0da3be450a7f8\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.424497647Z" level=info msg="CreateContainer within sandbox \"c8fc9fd0322711bf7409db7fc055c6d3a6c4056ca87d9ffcc5a0da3be450a7f8\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\""
	Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.425311739Z" level=info msg="StartContainer for \"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\""
	Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.507934237Z" level=info msg="StartContainer for \"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\" returns successfully"
	Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.509764843Z" level=info msg="received exit event container_id:\"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\" id:\"7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab\" pid:3309 exit_status:255 exited_at:{seconds:1737395403 nanos:509483578}"
	Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.532607815Z" level=info msg="shim disconnected" id=7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab namespace=k8s.io
	Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.532668944Z" level=warning msg="cleaning up after shim disconnected" id=7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab namespace=k8s.io
	Jan 20 17:50:03 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:03.532679372Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 17:50:04 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:04.270019870Z" level=info msg="RemoveContainer for \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\""
	Jan 20 17:50:04 old-k8s-version-145659 containerd[566]: time="2025-01-20T17:50:04.276782651Z" level=info msg="RemoveContainer for \"4110097c1b1c0a6617bcf0ea2db994480f40720ff7cececd5db52558095d7b37\" returns successfully"
	
	
	==> coredns [583937fe8212617f297f89ebbc5b58a073807978d2713d775599c78bac96a647] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:54633 - 34674 "HINFO IN 1620636510534185632.5118592647921316763. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026162743s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0120 17:47:05.190043       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 17:46:35.18949319 +0000 UTC m=+0.083059828) (total time: 30.000451714s):
	Trace[2019727887]: [30.000451714s] [30.000451714s] END
	E0120 17:47:05.190075       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0120 17:47:05.201106       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 17:46:35.200720128 +0000 UTC m=+0.094286758) (total time: 30.000358028s):
	Trace[939984059]: [30.000358028s] [30.000358028s] END
	E0120 17:47:05.201130       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0120 17:47:05.201207       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 17:46:35.201021906 +0000 UTC m=+0.094588536) (total time: 30.000175176s):
	Trace[1474941318]: [30.000175176s] [30.000175176s] END
	E0120 17:47:05.201217       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [c290ff766fe3257ec25b9896daa0a860b02713ac931113f0589cd04f6c7695dc] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34633 - 7735 "HINFO IN 8473092561579625720.1642975792137428739. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.069892168s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-145659
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-145659
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
	                    minikube.k8s.io/name=old-k8s-version-145659
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T17_43_49_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 17:43:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-145659
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 17:52:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 17:52:26 +0000   Mon, 20 Jan 2025 17:43:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 17:52:26 +0000   Mon, 20 Jan 2025 17:43:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 17:52:26 +0000   Mon, 20 Jan 2025 17:43:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 17:52:26 +0000   Mon, 20 Jan 2025 17:44:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-145659
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 2288b08bfb774fab9c2db8bb9a3f2e51
	  System UUID:                fd62fb04-58fa-4af2-9e2d-f153fa752255
	  Boot ID:                    39eacc08-2a64-468f-9148-fca198b76ea1
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 coredns-74ff55c5b-gtjp2                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m24s
	  kube-system                 etcd-old-k8s-version-145659                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m31s
	  kube-system                 kindnet-lqrj9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m24s
	  kube-system                 kube-apiserver-old-k8s-version-145659             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-old-k8s-version-145659    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-mxqgj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-old-k8s-version-145659             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 metrics-server-9975d5f86-wxlv8                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m35s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-cl8l4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-httgs               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m51s (x4 over 8m51s)  kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m51s (x5 over 8m51s)  kubelet     Node old-k8s-version-145659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m51s (x4 over 8m51s)  kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m31s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m31s                  kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m31s                  kubelet     Node old-k8s-version-145659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m31s                  kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m24s                  kubelet     Node old-k8s-version-145659 status is now: NodeReady
	  Normal  Starting                 8m22s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m6s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)    kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)    kubelet     Node old-k8s-version-145659 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)    kubelet     Node old-k8s-version-145659 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m53s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan20 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014724] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.513857] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029310] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.772508] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.261433] kauditd_printk_skb: 36 callbacks suppressed
	[Jan20 17:36] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [17f42bfa9e9d5688ca9a1a781471b928a2bc897143f2eb1ea300cdfe1a97a192] <==
	2025-01-20 17:48:25.120706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:48:35.120566 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:48:45.121152 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:48:55.120696 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:49:05.120775 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:49:15.121566 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:49:25.120919 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:49:35.120743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:49:45.120973 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:49:55.120626 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:50:05.120760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:50:15.120618 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:50:25.120732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:50:35.120740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:50:45.123185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:50:55.120862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:51:05.120833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:51:15.120913 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:51:25.120593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:51:35.120634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:51:45.120959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:51:55.120773 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:52:05.120967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:52:15.120952 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:52:25.120621 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [658c4e5a0b63ed783ab05b22512ead4932b817b501a2f86c6fa42adbb2ed50ec] <==
	raft2025/01/20 17:43:38 INFO: ea7e25599daad906 became candidate at term 2
	raft2025/01/20 17:43:38 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2025/01/20 17:43:38 INFO: ea7e25599daad906 became leader at term 2
	raft2025/01/20 17:43:38 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2025-01-20 17:43:38.554146 I | etcdserver: setting up the initial cluster version to 3.4
	2025-01-20 17:43:38.555242 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-01-20 17:43:38.555433 I | etcdserver/api: enabled capabilities for version 3.4
	2025-01-20 17:43:38.555507 I | etcdserver: published {Name:old-k8s-version-145659 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2025-01-20 17:43:38.555532 I | embed: ready to serve client requests
	2025-01-20 17:43:38.560722 I | embed: serving client requests on 127.0.0.1:2379
	2025-01-20 17:43:38.573464 I | embed: ready to serve client requests
	2025-01-20 17:43:38.575247 I | embed: serving client requests on 192.168.76.2:2379
	2025-01-20 17:43:58.449177 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:43:59.406240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:44:09.406301 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:44:19.406306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:44:29.406428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:44:39.406640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:44:49.406549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:44:59.406475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:45:09.406591 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:45:19.406485 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:45:29.406436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:45:39.406425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 17:45:49.406346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 17:52:29 up  1:34,  0 users,  load average: 1.63, 1.77, 2.17
	Linux old-k8s-version-145659 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6471c303e0b434953c7470980f35a010f769f921ed4e3ad99cbcbe5698a3d478] <==
	I0120 17:50:25.833249       1 main.go:301] handling current node
	I0120 17:50:35.824932       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:50:35.824965       1 main.go:301] handling current node
	I0120 17:50:45.824564       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:50:45.824602       1 main.go:301] handling current node
	I0120 17:50:55.833332       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:50:55.833365       1 main.go:301] handling current node
	I0120 17:51:05.824500       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:51:05.824633       1 main.go:301] handling current node
	I0120 17:51:15.827875       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:51:15.827911       1 main.go:301] handling current node
	I0120 17:51:25.833825       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:51:25.833864       1 main.go:301] handling current node
	I0120 17:51:35.824196       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:51:35.824286       1 main.go:301] handling current node
	I0120 17:51:45.833048       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:51:45.833082       1 main.go:301] handling current node
	I0120 17:51:55.831597       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:51:55.831632       1 main.go:301] handling current node
	I0120 17:52:05.831428       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:52:05.831678       1 main.go:301] handling current node
	I0120 17:52:15.832888       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:52:15.832921       1 main.go:301] handling current node
	I0120 17:52:25.833767       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:52:25.833806       1 main.go:301] handling current node
	
	
	==> kindnet [c56a4a523a5ef6e5a88565ed09d6f0c7433c6a44c5b22f390b9665e441d2e58f] <==
	I0120 17:44:07.536826       1 controller.go:365] Waiting for informer caches to sync
	I0120 17:44:07.536833       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0120 17:44:07.723505       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0120 17:44:07.723545       1 metrics.go:61] Registering metrics
	I0120 17:44:07.723617       1 controller.go:401] Syncing nftables rules
	I0120 17:44:17.544127       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:44:17.544187       1 main.go:301] handling current node
	I0120 17:44:27.536583       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:44:27.536617       1 main.go:301] handling current node
	I0120 17:44:37.536660       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:44:37.536723       1 main.go:301] handling current node
	I0120 17:44:47.544711       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:44:47.544746       1 main.go:301] handling current node
	I0120 17:44:57.543782       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:44:57.543815       1 main.go:301] handling current node
	I0120 17:45:07.537343       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:45:07.537376       1 main.go:301] handling current node
	I0120 17:45:17.544285       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:45:17.544318       1 main.go:301] handling current node
	I0120 17:45:27.543267       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:45:27.543300       1 main.go:301] handling current node
	I0120 17:45:37.536990       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:45:37.537020       1 main.go:301] handling current node
	I0120 17:45:47.540337       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0120 17:45:47.540398       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a3c342d8958e0b05dfa0565dd85dc5eff900d7addd9767fa280e08c24cb53a7e] <==
	I0120 17:43:46.444805       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0120 17:43:46.444856       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0120 17:43:46.460114       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0120 17:43:46.468180       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0120 17:43:46.468201       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0120 17:43:46.963006       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0120 17:43:47.023197       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0120 17:43:47.149965       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0120 17:43:47.151201       1 controller.go:606] quota admission added evaluator for: endpoints
	I0120 17:43:47.156822       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0120 17:43:48.087063       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0120 17:43:48.749417       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0120 17:43:48.822899       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0120 17:43:57.253673       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0120 17:44:03.986816       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0120 17:44:04.096335       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0120 17:44:19.554892       1 client.go:360] parsed scheme: "passthrough"
	I0120 17:44:19.554937       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 17:44:19.554971       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 17:44:51.192186       1 client.go:360] parsed scheme: "passthrough"
	I0120 17:44:51.192232       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 17:44:51.192265       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 17:45:29.900908       1 client.go:360] parsed scheme: "passthrough"
	I0120 17:45:29.900951       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 17:45:29.900960       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [f8793ba82cf05274adcb102cc031e89b494647d05487703a2e6335776891d7ad] <==
	I0120 17:48:56.190147       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 17:48:56.190181       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 17:49:31.253359       1 client.go:360] parsed scheme: "passthrough"
	I0120 17:49:31.253405       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 17:49:31.253415       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0120 17:49:36.356501       1 handler_proxy.go:102] no RequestInfo found in the context
	E0120 17:49:36.356702       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0120 17:49:36.356720       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 17:50:05.104413       1 client.go:360] parsed scheme: "passthrough"
	I0120 17:50:05.104482       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 17:50:05.104492       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 17:50:43.727916       1 client.go:360] parsed scheme: "passthrough"
	I0120 17:50:43.727965       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 17:50:43.728131       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 17:51:23.081480       1 client.go:360] parsed scheme: "passthrough"
	I0120 17:51:23.081528       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 17:51:23.081537       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0120 17:51:34.244674       1 handler_proxy.go:102] no RequestInfo found in the context
	E0120 17:51:34.244741       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0120 17:51:34.244755       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 17:52:01.897998       1 client.go:360] parsed scheme: "passthrough"
	I0120 17:52:01.898055       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 17:52:01.898064       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [6be5e3acc8c3c6d95563d09dc634e22600322e620b5b1a87930f662c6c46a26f] <==
	I0120 17:44:04.086808       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-145659" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0120 17:44:04.091989       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-65gdx"
	I0120 17:44:04.099772       1 range_allocator.go:373] Set node old-k8s-version-145659 PodCIDR to [10.244.0.0/24]
	I0120 17:44:04.100084       1 shared_informer.go:247] Caches are synced for expand 
	E0120 17:44:04.124613       1 range_allocator.go:361] Node old-k8s-version-145659 already has a CIDR allocated [10.244.0.0/24]. Releasing the new one.
	E0120 17:44:04.136086       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0120 17:44:04.137229       1 shared_informer.go:247] Caches are synced for resource quota 
	I0120 17:44:04.137362       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-gtjp2"
	E0120 17:44:04.157964       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0120 17:44:04.159249       1 shared_informer.go:247] Caches are synced for resource quota 
	I0120 17:44:04.184238       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mxqgj"
	I0120 17:44:04.203457       1 shared_informer.go:247] Caches are synced for attach detach 
	I0120 17:44:04.205063       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lqrj9"
	I0120 17:44:04.299782       1 shared_informer.go:247] Caches are synced for persistent volume 
	E0120 17:44:04.300497       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"49a018a0-e8a9-49ea-a4f8-032b341ec2c5", ResourceVersion:"258", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63872991828, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001675c60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001675c80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001675ca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001a0e280), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001675
cc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001675ce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001675d20)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40019e8ba0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000dc2a58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a6d490), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002f3d58)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000dc2aa8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0120 17:44:04.411159       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0120 17:44:04.711619       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0120 17:44:04.729775       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0120 17:44:04.729799       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0120 17:44:05.365440       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0120 17:44:05.401172       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-65gdx"
	I0120 17:44:09.036782       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0120 17:45:52.714577       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0120 17:45:52.920094       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0120 17:45:52.928601       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [c5b412b8a50edbf359cb53f9057e409ec42325feb14760bd8586181881ec567c] <==
	E0120 17:48:24.790405       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 17:48:30.388263       1 request.go:655] Throttling request took 1.048091094s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 17:48:31.241431       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 17:48:55.292217       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 17:49:02.892477       1 request.go:655] Throttling request took 1.047991938s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 17:49:03.743947       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 17:49:25.794091       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 17:49:35.394526       1 request.go:655] Throttling request took 1.048487874s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0120 17:49:36.245593       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 17:49:56.296073       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 17:50:07.896224       1 request.go:655] Throttling request took 1.048326156s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 17:50:08.747517       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 17:50:26.798212       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 17:50:40.397979       1 request.go:655] Throttling request took 1.048255406s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 17:50:41.249198       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 17:50:57.300029       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 17:51:12.899756       1 request.go:655] Throttling request took 1.048463917s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0120 17:51:13.751175       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 17:51:27.801946       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 17:51:45.401766       1 request.go:655] Throttling request took 1.048422857s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	W0120 17:51:46.253066       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 17:51:58.304342       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 17:52:17.903502       1 request.go:655] Throttling request took 1.04843924s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0120 17:52:18.754875       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 17:52:28.814922       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [6be91b040ecf6433b067acc687aba794141e52a0496908cb1ffd45b7e5c4bf42] <==
	I0120 17:44:06.502728       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0120 17:44:06.502816       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0120 17:44:06.531563       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0120 17:44:06.531669       1 server_others.go:185] Using iptables Proxier.
	I0120 17:44:06.531921       1 server.go:650] Version: v1.20.0
	I0120 17:44:06.532433       1 config.go:315] Starting service config controller
	I0120 17:44:06.532442       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0120 17:44:06.532457       1 config.go:224] Starting endpoint slice config controller
	I0120 17:44:06.532461       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0120 17:44:06.632523       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0120 17:44:06.632588       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [dcaa2ffccfffd42e8f8249b804261da00f6fb1d290ef7e3f2510a27730af436e] <==
	I0120 17:46:35.537525       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0120 17:46:35.537681       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0120 17:46:35.577936       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0120 17:46:35.578207       1 server_others.go:185] Using iptables Proxier.
	I0120 17:46:35.578587       1 server.go:650] Version: v1.20.0
	I0120 17:46:35.579663       1 config.go:315] Starting service config controller
	I0120 17:46:35.579751       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0120 17:46:35.579823       1 config.go:224] Starting endpoint slice config controller
	I0120 17:46:35.579868       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0120 17:46:35.679926       1 shared_informer.go:247] Caches are synced for service config 
	I0120 17:46:35.680047       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [2c63e2dabdc916bada8840d33b28d42672bec6f3d6016b4ce73559fbdb05d040] <==
	I0120 17:46:27.554936       1 serving.go:331] Generated self-signed cert in-memory
	W0120 17:46:33.249433       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 17:46:33.249463       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 17:46:33.249478       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 17:46:33.249483       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 17:46:33.504944       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0120 17:46:33.518455       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 17:46:33.520562       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 17:46:33.521669       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0120 17:46:33.621007       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [9e7debe8caa85fbd0fac1f9798753446e148e4f62959d6b9774530f88a61fc90] <==
	I0120 17:43:40.516697       1 serving.go:331] Generated self-signed cert in-memory
	W0120 17:43:45.593592       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 17:43:45.593833       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 17:43:45.593946       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 17:43:45.593955       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 17:43:45.671869       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 17:43:45.671903       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 17:43:45.672810       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0120 17:43:45.673044       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0120 17:43:45.686917       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 17:43:45.689160       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 17:43:45.689297       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 17:43:45.689362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 17:43:45.689428       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 17:43:45.689491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 17:43:45.694813       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 17:43:45.695121       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 17:43:45.695335       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 17:43:45.695664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 17:43:45.695898       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 17:43:45.699460       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 17:43:46.551048       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 17:43:46.693907       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 17:43:46.741428       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0120 17:43:47.172062       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jan 20 17:50:42 old-k8s-version-145659 kubelet[662]: E0120 17:50:42.404811     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	Jan 20 17:50:55 old-k8s-version-145659 kubelet[662]: E0120 17:50:55.403674     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: I0120 17:50:56.402944     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
	Jan 20 17:50:56 old-k8s-version-145659 kubelet[662]: E0120 17:50:56.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	Jan 20 17:51:07 old-k8s-version-145659 kubelet[662]: E0120 17:51:07.403709     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: I0120 17:51:10.403051     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
	Jan 20 17:51:10 old-k8s-version-145659 kubelet[662]: E0120 17:51:10.403931     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	Jan 20 17:51:18 old-k8s-version-145659 kubelet[662]: E0120 17:51:18.403950     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: I0120 17:51:22.403948     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
	Jan 20 17:51:22 old-k8s-version-145659 kubelet[662]: E0120 17:51:22.404319     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	Jan 20 17:51:29 old-k8s-version-145659 kubelet[662]: E0120 17:51:29.403725     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: I0120 17:51:33.402925     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
	Jan 20 17:51:33 old-k8s-version-145659 kubelet[662]: E0120 17:51:33.403275     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	Jan 20 17:51:44 old-k8s-version-145659 kubelet[662]: E0120 17:51:44.407958     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: I0120 17:51:47.403014     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
	Jan 20 17:51:47 old-k8s-version-145659 kubelet[662]: E0120 17:51:47.403443     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	Jan 20 17:51:57 old-k8s-version-145659 kubelet[662]: E0120 17:51:57.403808     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: I0120 17:52:02.407403     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
	Jan 20 17:52:02 old-k8s-version-145659 kubelet[662]: E0120 17:52:02.408199     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	Jan 20 17:52:09 old-k8s-version-145659 kubelet[662]: E0120 17:52:09.403693     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: I0120 17:52:14.404937     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
	Jan 20 17:52:14 old-k8s-version-145659 kubelet[662]: E0120 17:52:14.405270     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	Jan 20 17:52:24 old-k8s-version-145659 kubelet[662]: E0120 17:52:24.404016     662 pod_workers.go:191] Error syncing pod 61f37e4b-0dae-419e-bb50-91279d5d8583 ("metrics-server-9975d5f86-wxlv8_kube-system(61f37e4b-0dae-419e-bb50-91279d5d8583)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 17:52:29 old-k8s-version-145659 kubelet[662]: I0120 17:52:29.402951     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7188f8c06a3b3ad2ba31f51a9dbbb41d0a10ed8e0cb8f7eb73a7781b581176ab
	Jan 20 17:52:29 old-k8s-version-145659 kubelet[662]: E0120 17:52:29.403433     662 pod_workers.go:191] Error syncing pod 26ab0be0-c203-49f2-9c52-ad0324900913 ("dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-cl8l4_kubernetes-dashboard(26ab0be0-c203-49f2-9c52-ad0324900913)"
	
	
	==> kubernetes-dashboard [9d777334c1d3aaabc3f6bbc4cd85fd4513441c89027701b7192eeddafd24e1d8] <==
	2025/01/20 17:46:57 Using namespace: kubernetes-dashboard
	2025/01/20 17:46:57 Using in-cluster config to connect to apiserver
	2025/01/20 17:46:57 Using secret token for csrf signing
	2025/01/20 17:46:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/20 17:46:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/20 17:46:57 Successful initial request to the apiserver, version: v1.20.0
	2025/01/20 17:46:57 Generating JWE encryption key
	2025/01/20 17:46:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/20 17:46:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/20 17:46:58 Initializing JWE encryption key from synchronized object
	2025/01/20 17:46:58 Creating in-cluster Sidecar client
	2025/01/20 17:46:58 Serving insecurely on HTTP port: 9090
	2025/01/20 17:46:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:47:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:47:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:48:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:48:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:49:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:49:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:50:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:50:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:51:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:51:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:52:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 17:46:57 Starting overwatch
	
	
	==> storage-provisioner [027296a49530001372b5a774dec7a3bca24dd42d48d49d107c60e9f5d90c5ef2] <==
	I0120 17:47:16.522832       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 17:47:16.544962       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 17:47:16.545174       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 17:47:33.995668       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 17:47:33.995957       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-145659_991e8f58-ec24-4b17-89db-bafb81509e25!
	I0120 17:47:33.996994       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"050d1ce1-85b4-4c9b-b2e6-8b644a582fe8", APIVersion:"v1", ResourceVersion:"831", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-145659_991e8f58-ec24-4b17-89db-bafb81509e25 became leader
	I0120 17:47:34.096938       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-145659_991e8f58-ec24-4b17-89db-bafb81509e25!
	
	
	==> storage-provisioner [91b967b2a1923a1e3672fc4671b23607f36f59b794fffd6fe7abba1953dcddcd] <==
	I0120 17:46:35.359063       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0120 17:47:05.365793       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145659 -n old-k8s-version-145659
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-145659 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-wxlv8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-145659 describe pod metrics-server-9975d5f86-wxlv8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-145659 describe pod metrics-server-9975d5f86-wxlv8: exit status 1 (151.528309ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-wxlv8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-145659 describe pod metrics-server-9975d5f86-wxlv8: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (385.51s)

                                                
                                    

Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.99
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.0/json-events 5.7
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.1
18 TestDownloadOnly/v1.32.0/DeleteAll 0.22
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 216.28
29 TestAddons/serial/Volcano 39.97
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 10.89
35 TestAddons/parallel/Registry 17.52
36 TestAddons/parallel/Ingress 19.27
37 TestAddons/parallel/InspektorGadget 10.86
38 TestAddons/parallel/MetricsServer 6.93
40 TestAddons/parallel/CSI 52.52
41 TestAddons/parallel/Headlamp 16.1
42 TestAddons/parallel/CloudSpanner 6.73
43 TestAddons/parallel/LocalPath 53.14
44 TestAddons/parallel/NvidiaDevicePlugin 5.94
45 TestAddons/parallel/Yakd 11.94
47 TestAddons/StoppedEnableDisable 12.29
48 TestCertOptions 37.92
49 TestCertExpiration 228.77
51 TestForceSystemdFlag 42.73
52 TestForceSystemdEnv 43.32
53 TestDockerEnvContainerd 43.73
58 TestErrorSpam/setup 30.36
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 1.83
62 TestErrorSpam/unpause 1.84
63 TestErrorSpam/stop 1.48
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.97
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.32
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.35
75 TestFunctional/serial/CacheCmd/cache/add_local 1.26
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 45.13
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.73
86 TestFunctional/serial/LogsFileCmd 1.84
87 TestFunctional/serial/InvalidService 4.41
89 TestFunctional/parallel/ConfigCmd 0.54
90 TestFunctional/parallel/DashboardCmd 8.85
91 TestFunctional/parallel/DryRun 0.47
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.25
97 TestFunctional/parallel/ServiceCmdConnect 6.75
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 24.8
101 TestFunctional/parallel/SSHCmd 0.52
102 TestFunctional/parallel/CpCmd 2.05
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.17
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
113 TestFunctional/parallel/License 0.33
114 TestFunctional/parallel/Version/short 0.09
115 TestFunctional/parallel/Version/components 1.24
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.45
121 TestFunctional/parallel/ImageCommands/Setup 0.73
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.44
127 TestFunctional/parallel/ServiceCmd/DeployApp 11.29
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.35
138 TestFunctional/parallel/ServiceCmd/List 0.35
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
141 TestFunctional/parallel/ServiceCmd/Format 0.37
142 TestFunctional/parallel/ServiceCmd/URL 0.35
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
150 TestFunctional/parallel/ProfileCmd/profile_list 0.41
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
152 TestFunctional/parallel/MountCmd/any-port 7.83
153 TestFunctional/parallel/MountCmd/specific-port 1.84
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.31
155 TestFunctional/delete_echo-server_images 0.06
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 117
162 TestMultiControlPlane/serial/DeployApp 31.76
163 TestMultiControlPlane/serial/PingHostFromPods 1.66
164 TestMultiControlPlane/serial/AddWorkerNode 22.16
165 TestMultiControlPlane/serial/NodeLabels 0.11
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
167 TestMultiControlPlane/serial/CopyFile 19.15
168 TestMultiControlPlane/serial/StopSecondaryNode 12.83
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
170 TestMultiControlPlane/serial/RestartSecondaryNode 28.76
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.15
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 112.48
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.72
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
175 TestMultiControlPlane/serial/StopCluster 25.12
176 TestMultiControlPlane/serial/RestartCluster 86.76
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
178 TestMultiControlPlane/serial/AddSecondaryNode 43.29
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.02
183 TestJSONOutput/start/Command 84.24
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.67
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 1.27
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.26
208 TestKicCustomNetwork/create_custom_network 40.95
209 TestKicCustomNetwork/use_default_bridge_network 36.26
210 TestKicExistingNetwork 37.37
211 TestKicCustomSubnet 33.46
212 TestKicStaticIP 32.27
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 71.49
217 TestMountStart/serial/StartWithMountFirst 6.31
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 6.37
220 TestMountStart/serial/VerifyMountSecond 0.28
221 TestMountStart/serial/DeleteFirst 1.62
222 TestMountStart/serial/VerifyMountPostDelete 0.26
223 TestMountStart/serial/Stop 1.21
224 TestMountStart/serial/RestartStopped 7.38
225 TestMountStart/serial/VerifyMountPostStop 0.25
228 TestMultiNode/serial/FreshStart2Nodes 91.48
229 TestMultiNode/serial/DeployApp2Nodes 18.45
230 TestMultiNode/serial/PingHostFrom2Pods 1
231 TestMultiNode/serial/AddNode 18.98
232 TestMultiNode/serial/MultiNodeLabels 0.1
233 TestMultiNode/serial/ProfileList 0.65
234 TestMultiNode/serial/CopyFile 9.96
235 TestMultiNode/serial/StopNode 2.26
236 TestMultiNode/serial/StartAfterStop 10.31
237 TestMultiNode/serial/RestartKeepsNodes 84.54
238 TestMultiNode/serial/DeleteNode 5.3
239 TestMultiNode/serial/StopMultiNode 23.86
240 TestMultiNode/serial/RestartMultiNode 56.92
241 TestMultiNode/serial/ValidateNameConflict 35.69
246 TestPreload 115.37
248 TestScheduledStopUnix 106.46
251 TestInsufficientStorage 13.25
252 TestRunningBinaryUpgrade 88.38
254 TestKubernetesUpgrade 345.19
255 TestMissingContainerUpgrade 164.23
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 41.62
259 TestNoKubernetes/serial/StartWithStopK8s 17.94
260 TestNoKubernetes/serial/Start 7.73
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
262 TestNoKubernetes/serial/ProfileList 1.24
263 TestNoKubernetes/serial/Stop 1.28
264 TestNoKubernetes/serial/StartNoArgs 8.25
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
266 TestStoppedBinaryUpgrade/Setup 0.91
267 TestStoppedBinaryUpgrade/Upgrade 102.09
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
277 TestPause/serial/Start 49.24
278 TestPause/serial/SecondStartNoReconfiguration 7.24
279 TestPause/serial/Pause 1.06
280 TestPause/serial/VerifyStatus 0.58
281 TestPause/serial/Unpause 1.13
282 TestPause/serial/PauseAgain 1.21
283 TestPause/serial/DeletePaused 3.15
284 TestPause/serial/VerifyDeletedResources 0.53
292 TestNetworkPlugins/group/false 5.08
297 TestStartStop/group/old-k8s-version/serial/FirstStart 155.19
298 TestStartStop/group/old-k8s-version/serial/DeployApp 10.69
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.79
300 TestStartStop/group/old-k8s-version/serial/Stop 12.34
302 TestStartStop/group/embed-certs/serial/FirstStart 83.97
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
305 TestStartStop/group/embed-certs/serial/DeployApp 8.39
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.26
307 TestStartStop/group/embed-certs/serial/Stop 12
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
309 TestStartStop/group/embed-certs/serial/SecondStart 268.51
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
312 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
313 TestStartStop/group/embed-certs/serial/Pause 3.13
315 TestStartStop/group/no-preload/serial/FirstStart 76.04
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
319 TestStartStop/group/old-k8s-version/serial/Pause 3.73
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.36
322 TestStartStop/group/no-preload/serial/DeployApp 9.36
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.39
324 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
325 TestStartStop/group/no-preload/serial/Stop 12.17
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.21
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
328 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/no-preload/serial/SecondStart 300.95
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 303.12
332 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
336 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
337 TestStartStop/group/no-preload/serial/Pause 3.04
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.12
341 TestStartStop/group/newest-cni/serial/FirstStart 44.32
342 TestNetworkPlugins/group/auto/Start 60.89
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.56
345 TestStartStop/group/newest-cni/serial/Stop 1.29
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
347 TestStartStop/group/newest-cni/serial/SecondStart 16.89
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
351 TestStartStop/group/newest-cni/serial/Pause 3.53
352 TestNetworkPlugins/group/auto/KubeletFlags 0.51
353 TestNetworkPlugins/group/auto/NetCatPod 13.46
354 TestNetworkPlugins/group/kindnet/Start 87.89
355 TestNetworkPlugins/group/auto/DNS 0.34
356 TestNetworkPlugins/group/auto/Localhost 0.2
357 TestNetworkPlugins/group/auto/HairPin 0.23
358 TestNetworkPlugins/group/calico/Start 62.22
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
361 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/kindnet/DNS 0.23
364 TestNetworkPlugins/group/kindnet/Localhost 0.17
365 TestNetworkPlugins/group/kindnet/HairPin 0.18
366 TestNetworkPlugins/group/calico/KubeletFlags 0.3
367 TestNetworkPlugins/group/calico/NetCatPod 10.31
368 TestNetworkPlugins/group/calico/DNS 0.3
369 TestNetworkPlugins/group/calico/Localhost 0.27
370 TestNetworkPlugins/group/calico/HairPin 0.21
371 TestNetworkPlugins/group/custom-flannel/Start 54.96
372 TestNetworkPlugins/group/enable-default-cni/Start 51.22
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
375 TestNetworkPlugins/group/custom-flannel/DNS 0.18
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.29
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
383 TestNetworkPlugins/group/flannel/Start 59.31
384 TestNetworkPlugins/group/bridge/Start 82.07
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
387 TestNetworkPlugins/group/flannel/NetCatPod 10.28
388 TestNetworkPlugins/group/flannel/DNS 0.19
389 TestNetworkPlugins/group/flannel/Localhost 0.16
390 TestNetworkPlugins/group/flannel/HairPin 0.18
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
392 TestNetworkPlugins/group/bridge/NetCatPod 10.43
393 TestNetworkPlugins/group/bridge/DNS 0.17
394 TestNetworkPlugins/group/bridge/Localhost 0.14
395 TestNetworkPlugins/group/bridge/HairPin 0.23
TestDownloadOnly/v1.20.0/json-events (6.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-502709 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-502709 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.984688474s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.99s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 16:58:17.192848    7844 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 16:58:17.192926    7844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-502709
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-502709: exit status 85 (95.271227ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-502709 | jenkins | v1.35.0 | 20 Jan 25 16:58 UTC |          |
	|         | -p download-only-502709        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:58:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 16:58:10.253721    7849 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:58:10.253843    7849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:58:10.253878    7849 out.go:358] Setting ErrFile to fd 2...
	I0120 16:58:10.253899    7849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:58:10.254261    7849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	W0120 16:58:10.254461    7849 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20109-2518/.minikube/config/config.json: open /home/jenkins/minikube-integration/20109-2518/.minikube/config/config.json: no such file or directory
	I0120 16:58:10.254936    7849 out.go:352] Setting JSON to true
	I0120 16:58:10.255818    7849 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2435,"bootTime":1737389856,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 16:58:10.255951    7849 start.go:139] virtualization:  
	I0120 16:58:10.260696    7849 out.go:97] [download-only-502709] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0120 16:58:10.260912    7849 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 16:58:10.261011    7849 notify.go:220] Checking for updates...
	I0120 16:58:10.265007    7849 out.go:169] MINIKUBE_LOCATION=20109
	I0120 16:58:10.268385    7849 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:58:10.271334    7849 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 16:58:10.274335    7849 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	I0120 16:58:10.277176    7849 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0120 16:58:10.282822    7849 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 16:58:10.283099    7849 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:58:10.314596    7849 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 16:58:10.314699    7849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 16:58:10.668082    7849 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 16:58:10.65879591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 16:58:10.668188    7849 docker.go:318] overlay module found
	I0120 16:58:10.671117    7849 out.go:97] Using the docker driver based on user configuration
	I0120 16:58:10.671149    7849 start.go:297] selected driver: docker
	I0120 16:58:10.671157    7849 start.go:901] validating driver "docker" against <nil>
	I0120 16:58:10.671272    7849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 16:58:10.729168    7849 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 16:58:10.720474214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 16:58:10.729377    7849 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:58:10.729666    7849 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0120 16:58:10.729881    7849 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 16:58:10.733091    7849 out.go:169] Using Docker driver with root privileges
	I0120 16:58:10.735843    7849 cni.go:84] Creating CNI manager for ""
	I0120 16:58:10.735905    7849 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 16:58:10.735917    7849 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 16:58:10.735998    7849 start.go:340] cluster config:
	{Name:download-only-502709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-502709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:58:10.739040    7849 out.go:97] Starting "download-only-502709" primary control-plane node in "download-only-502709" cluster
	I0120 16:58:10.739074    7849 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 16:58:10.742007    7849 out.go:97] Pulling base image v0.0.46 ...
	I0120 16:58:10.742033    7849 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 16:58:10.742164    7849 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 16:58:10.758213    7849 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 16:58:10.758365    7849 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0120 16:58:10.758467    7849 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 16:58:10.794136    7849 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0120 16:58:10.794177    7849 cache.go:56] Caching tarball of preloaded images
	I0120 16:58:10.794320    7849 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 16:58:10.797701    7849 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 16:58:10.797727    7849 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 16:58:10.887389    7849 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-502709 host does not exist
	  To start a cluster, run: "minikube start -p download-only-502709"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-502709
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.32.0/json-events (5.7s)

=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-238280 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-238280 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.70455469s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (5.70s)

TestDownloadOnly/v1.32.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 16:58:23.366106    7844 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 16:58:23.366149    7844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

TestDownloadOnly/v1.32.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-238280
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-238280: exit status 85 (94.821302ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-502709 | jenkins | v1.35.0 | 20 Jan 25 16:58 UTC |                     |
	|         | -p download-only-502709        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 16:58 UTC | 20 Jan 25 16:58 UTC |
	| delete  | -p download-only-502709        | download-only-502709 | jenkins | v1.35.0 | 20 Jan 25 16:58 UTC | 20 Jan 25 16:58 UTC |
	| start   | -o=json --download-only        | download-only-238280 | jenkins | v1.35.0 | 20 Jan 25 16:58 UTC |                     |
	|         | -p download-only-238280        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 16:58:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 16:58:17.711913    8049 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:58:17.712171    8049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:58:17.712185    8049 out.go:358] Setting ErrFile to fd 2...
	I0120 16:58:17.712191    8049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:58:17.712602    8049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 16:58:17.713291    8049 out.go:352] Setting JSON to true
	I0120 16:58:17.714013    8049 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2442,"bootTime":1737389856,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 16:58:17.714083    8049 start.go:139] virtualization:  
	I0120 16:58:17.717167    8049 out.go:97] [download-only-238280] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 16:58:17.717379    8049 notify.go:220] Checking for updates...
	I0120 16:58:17.719976    8049 out.go:169] MINIKUBE_LOCATION=20109
	I0120 16:58:17.722900    8049 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:58:17.725652    8049 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 16:58:17.728352    8049 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	I0120 16:58:17.731459    8049 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0120 16:58:17.737486    8049 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 16:58:17.737752    8049 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:58:17.762417    8049 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 16:58:17.762511    8049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 16:58:17.819527    8049 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 16:58:17.810611018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 16:58:17.819634    8049 docker.go:318] overlay module found
	I0120 16:58:17.822817    8049 out.go:97] Using the docker driver based on user configuration
	I0120 16:58:17.822838    8049 start.go:297] selected driver: docker
	I0120 16:58:17.822844    8049 start.go:901] validating driver "docker" against <nil>
	I0120 16:58:17.822951    8049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 16:58:17.877817    8049 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 16:58:17.869195755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 16:58:17.878050    8049 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 16:58:17.878348    8049 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0120 16:58:17.878512    8049 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 16:58:17.881263    8049 out.go:169] Using Docker driver with root privileges
	I0120 16:58:17.883640    8049 cni.go:84] Creating CNI manager for ""
	I0120 16:58:17.883707    8049 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 16:58:17.883721    8049 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 16:58:17.883802    8049 start.go:340] cluster config:
	{Name:download-only-238280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:download-only-238280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 16:58:17.886313    8049 out.go:97] Starting "download-only-238280" primary control-plane node in "download-only-238280" cluster
	I0120 16:58:17.886334    8049 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 16:58:17.888956    8049 out.go:97] Pulling base image v0.0.46 ...
	I0120 16:58:17.888988    8049 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 16:58:17.889049    8049 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 16:58:17.905118    8049 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 16:58:17.905245    8049 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0120 16:58:17.905263    8049 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0120 16:58:17.905267    8049 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0120 16:58:17.905274    8049 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0120 16:58:17.953540    8049 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
	I0120 16:58:17.953570    8049 cache.go:56] Caching tarball of preloaded images
	I0120 16:58:17.953758    8049 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 16:58:17.956536    8049 out.go:97] Downloading Kubernetes v1.32.0 preload ...
	I0120 16:58:17.956560    8049 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 16:58:18.040685    8049 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:bf17808bb02e2942f486582f7290de30 -> /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
	I0120 16:58:21.743431    8049 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 16:58:21.743595    8049 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20109-2518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 16:58:22.613260    8049 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
	I0120 16:58:22.613627    8049 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/download-only-238280/config.json ...
	I0120 16:58:22.613661    8049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/download-only-238280/config.json: {Name:mkf4b385746582aa0e45d688f2507abaede1015c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 16:58:22.613856    8049 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 16:58:22.614009    8049 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20109-2518/.minikube/cache/linux/arm64/v1.32.0/kubectl
	
	
	* The control-plane node download-only-238280 host does not exist
	  To start a cluster, run: "minikube start -p download-only-238280"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.10s)

TestDownloadOnly/v1.32.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.22s)

TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-238280
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0120 16:58:24.689123    7844 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-782328 --alsologtostderr --binary-mirror http://127.0.0.1:46297 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-782328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-782328
--- PASS: TestBinaryMirror (0.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-168570
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-168570: exit status 85 (79.281314ms)

-- stdout --
	* Profile "addons-168570" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-168570"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-168570
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-168570: exit status 85 (80.273644ms)

-- stdout --
	* Profile "addons-168570" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-168570"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (216.28s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-168570 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-168570 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m36.28011521s)
--- PASS: TestAddons/Setup (216.28s)

TestAddons/serial/Volcano (39.97s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 53.147924ms
addons_test.go:807: volcano-scheduler stabilized in 53.350378ms
addons_test.go:815: volcano-admission stabilized in 53.489062ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-q8cvh" [d1c6cbb9-0730-4e52-b0d1-798c8c155a26] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004047854s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-qbfnz" [0eb48021-28e9-4b1d-ab88-5987805180e9] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003349411s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-m57qx" [94b1eae2-45ad-4207-8247-03855775167c] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005011244s
addons_test.go:842: (dbg) Run:  kubectl --context addons-168570 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-168570 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-168570 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b42a3eef-3cfb-4de5-b5f3-bc7336677b22] Pending
helpers_test.go:344: "test-job-nginx-0" [b42a3eef-3cfb-4de5-b5f3-bc7336677b22] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b42a3eef-3cfb-4de5-b5f3-bc7336677b22] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004251353s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-168570 addons disable volcano --alsologtostderr -v=1: (11.25872161s)
--- PASS: TestAddons/serial/Volcano (39.97s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-168570 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-168570 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-168570 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-168570 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6ccd59df-5349-4124-8a46-8ce6ee373cbf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6ccd59df-5349-4124-8a46-8ce6ee373cbf] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004597006s
addons_test.go:633: (dbg) Run:  kubectl --context addons-168570 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-168570 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-168570 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-168570 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

TestAddons/parallel/Registry (17.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.345691ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-5b8dr" [05593ef4-bf60-419f-be5c-230e20a972aa] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006702472s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b77mj" [f61bde6f-ff8c-44e7-b639-6fd9725a086e] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004315376s
addons_test.go:331: (dbg) Run:  kubectl --context addons-168570 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-168570 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-168570 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.349090647s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 ip
2025/01/20 17:03:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.52s)

TestAddons/parallel/Ingress (19.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-168570 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-168570 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-168570 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b7409cc7-ae20-4183-8a36-6b5e3e4e1dc8] Pending
helpers_test.go:344: "nginx" [b7409cc7-ae20-4183-8a36-6b5e3e4e1dc8] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003842911s
I0120 17:04:38.951733    7844 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-168570 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-168570 addons disable ingress-dns --alsologtostderr -v=1: (1.78052714s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-168570 addons disable ingress --alsologtostderr -v=1: (7.89992563s)
--- PASS: TestAddons/parallel/Ingress (19.27s)

TestAddons/parallel/InspektorGadget (10.86s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-h57gv" [9c7466f4-d362-4f83-bd13-b186a6727b52] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004545837s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-168570 addons disable inspektor-gadget --alsologtostderr -v=1: (5.850969546s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

TestAddons/parallel/MetricsServer (6.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.405878ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-68c6w" [dd1ff78c-9d13-4102-b4df-d8ff3855e5bf] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007215796s
addons_test.go:402: (dbg) Run:  kubectl --context addons-168570 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.93s)

TestAddons/parallel/CSI (52.52s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0120 17:03:42.605052    7844 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0120 17:03:42.610470    7844 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 17:03:42.610937    7844 kapi.go:107] duration metric: took 8.942429ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.197348ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-168570 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-168570 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7fa0864a-9a5d-443e-9242-89f437eda20e] Pending
helpers_test.go:344: "task-pv-pod" [7fa0864a-9a5d-443e-9242-89f437eda20e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7fa0864a-9a5d-443e-9242-89f437eda20e] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00365015s
addons_test.go:511: (dbg) Run:  kubectl --context addons-168570 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-168570 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-168570 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-168570 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-168570 delete pod task-pv-pod: (1.298191318s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-168570 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-168570 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-168570 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [be433261-aad7-4028-bcbe-4b84efc4926e] Pending
helpers_test.go:344: "task-pv-pod-restore" [be433261-aad7-4028-bcbe-4b84efc4926e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [be433261-aad7-4028-bcbe-4b84efc4926e] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00861007s
addons_test.go:553: (dbg) Run:  kubectl --context addons-168570 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-168570 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-168570 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-168570 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.838828722s)
--- PASS: TestAddons/parallel/CSI (52.52s)

TestAddons/parallel/Headlamp (16.1s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-168570 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-168570 --alsologtostderr -v=1: (1.09360369s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-wd27s" [a97cc616-3054-4a54-aa60-bf65e351259b] Pending
helpers_test.go:344: "headlamp-69d78d796f-wd27s" [a97cc616-3054-4a54-aa60-bf65e351259b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-wd27s" [a97cc616-3054-4a54-aa60-bf65e351259b] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.006904055s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-168570 addons disable headlamp --alsologtostderr -v=1: (5.995304159s)
--- PASS: TestAddons/parallel/Headlamp (16.10s)
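
The Headlamp check follows the usual addon pattern: enable the addon, wait for its labeled pod to become Ready, then disable it. A rough hand-run equivalent, assuming (as the log shows) the addon deploys into the headlamp namespace:

    minikube addons enable headlamp -p addons-168570
    kubectl --context addons-168570 -n headlamp wait pod \
      -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m
    minikube -p addons-168570 addons disable headlamp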

                                                
                                    
TestAddons/parallel/CloudSpanner (6.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-7rck9" [578c6b4c-8d97-462a-9f2c-160b0e8839c1] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004423131s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.73s)

                                                
                                    
TestAddons/parallel/LocalPath (53.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-168570 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-168570 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-168570 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5b2609e9-27f1-4551-a9bd-57e5cc0cb303] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5b2609e9-27f1-4551-a9bd-57e5cc0cb303] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5b2609e9-27f1-4551-a9bd-57e5cc0cb303] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004178206s
addons_test.go:906: (dbg) Run:  kubectl --context addons-168570 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 ssh "cat /opt/local-path-provisioner/pvc-cc329a19-7466-4efa-b851-cd929ee7f31b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-168570 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-168570 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-168570 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.741052924s)
--- PASS: TestAddons/parallel/LocalPath (53.14s)
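
The LocalPath test verifies that the rancher local-path provisioner actually wrote the pod's data onto the node: it reads the bound PVC to find the generated PV name, then cats the backing file under /opt/local-path-provisioner via minikube ssh. A sketch of the same check; the pvc-… directory differs per run, and the <pv>_<namespace>_<pvc> layout is inferred from the path shown in the log.

    PV=$(kubectl --context addons-168570 -n default get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    minikube -p addons-168570 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"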

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.94s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2dgln" [80f6dc96-aa1e-4305-b2a4-ccdfe7b6892f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006838673s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.94s)

                                                
                                    
TestAddons/parallel/Yakd (11.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-2z47v" [395e47ef-0c98-419a-9a9b-681d8120719e] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003555135s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-168570 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-168570 addons disable yakd --alsologtostderr -v=1: (5.931723405s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.29s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-168570
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-168570: (11.978484523s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-168570
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-168570
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-168570
--- PASS: TestAddons/StoppedEnableDisable (12.29s)

                                                
                                    
TestCertOptions (37.92s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-779915 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-779915 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.219712449s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-779915 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-779915 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-779915 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-779915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-779915
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-779915: (2.030384221s)
--- PASS: TestCertOptions (37.92s)
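
TestCertOptions starts a profile with extra --apiserver-ips/--apiserver-names/--apiserver-port flags and then confirms they ended up in the apiserver serving certificate and in the kubeconfig. The same inspection can be done manually; the grep pattern below is only illustrative, and the kubeconfig server URL is expected to end in the custom :8555 port.

    minikube -p cert-options-779915 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    kubectl --context cert-options-779915 config view --minify \
      -o jsonpath='{.clusters[0].cluster.server}'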

                                                
                                    
TestCertExpiration (228.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-156373 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-156373 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.96773217s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-156373 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-156373 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.360932426s)
helpers_test.go:175: Cleaning up "cert-expiration-156373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-156373
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-156373: (2.434858704s)
--- PASS: TestCertExpiration (228.77s)
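
TestCertExpiration first starts the profile with --cert-expiration=3m, waits for those short-lived certs to cross the threshold (which accounts for most of the 228s runtime), then restarts with --cert-expiration=8760h so the certificates are regenerated. One way to eyeball the effective expiry on the node, as a sketch rather than part of the test:

    minikube -p cert-expiration-156373 ssh \
      "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"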

                                                
                                    
TestForceSystemdFlag (42.73s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-832515 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-832515 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.888754825s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-832515 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-832515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-832515
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-832515: (2.359107597s)
--- PASS: TestForceSystemdFlag (42.73s)
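
The --force-systemd check boils down to confirming that containerd inside the node was configured with the systemd cgroup driver. Roughly (start flags trimmed from the log; the expected line for containerd's runc runtime is SystemdCgroup = true):

    minikube start -p force-systemd-flag-832515 --force-systemd \
      --driver=docker --container-runtime=containerd
    minikube -p force-systemd-flag-832515 ssh "grep SystemdCgroup /etc/containerd/config.toml"
    # expected: SystemdCgroup = true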

                                                
                                    
TestForceSystemdEnv (43.32s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-062715 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0120 17:42:01.700973    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-062715 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.597473681s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-062715 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-062715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-062715
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-062715: (2.347205989s)
--- PASS: TestForceSystemdEnv (43.32s)

                                                
                                    
TestDockerEnvContainerd (43.73s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-891881 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-891881 --driver=docker  --container-runtime=containerd: (28.071314653s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-891881"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-FsGIieH2i14l/agent.28941" SSH_AGENT_PID="28942" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-FsGIieH2i14l/agent.28941" SSH_AGENT_PID="28942" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-FsGIieH2i14l/agent.28941" SSH_AGENT_PID="28942" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.254222276s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-FsGIieH2i14l/agent.28941" SSH_AGENT_PID="28942" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-891881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-891881
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-891881: (2.005727005s)
--- PASS: TestDockerEnvContainerd (43.73s)
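
TestDockerEnvContainerd drives the node's Docker daemon over SSH: docker-env --ssh-host --ssh-add emits SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST exports, after which an ordinary docker build on the host lands inside the minikube node. Run interactively, that is roughly:

    minikube start -p dockerenv-891881 --driver=docker --container-runtime=containerd
    eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-891881)"
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls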

                                                
                                    
TestErrorSpam/setup (30.36s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-214678 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-214678 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-214678 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-214678 --driver=docker  --container-runtime=containerd: (30.355177737s)
--- PASS: TestErrorSpam/setup (30.36s)

                                                
                                    
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.14s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
TestErrorSpam/pause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 pause
--- PASS: TestErrorSpam/pause (1.83s)

                                                
                                    
TestErrorSpam/unpause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 stop: (1.284931632s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-214678 --log_dir /tmp/nospam-214678 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20109-2518/.minikube/files/etc/test/nested/copy/7844/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (52.97s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659288 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0120 17:07:01.702378    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:01.708754    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:01.720153    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:01.741539    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:01.782890    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:01.864318    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:02.025706    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:02.347356    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:02.989477    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:04.271287    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:06.833032    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:11.955409    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:07:22.197250    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-659288 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (52.967427115s)
--- PASS: TestFunctional/serial/StartWithProxy (52.97s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.32s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0120 17:07:27.038243    7844 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659288 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-659288 --alsologtostderr -v=8: (6.316037992s)
functional_test.go:663: soft start took 6.317449576s for "functional-659288" cluster.
I0120 17:07:33.354612    7844 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (6.32s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-659288 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 cache add registry.k8s.io/pause:3.1: (1.537104093s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 cache add registry.k8s.io/pause:3.3: (1.507030996s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 cache add registry.k8s.io/pause:latest: (1.302495219s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-659288 /tmp/TestFunctionalserialCacheCmdcacheadd_local1619466466/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 cache add minikube-local-cache-test:functional-659288
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 cache delete minikube-local-cache-test:functional-659288
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-659288
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659288 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (322.146278ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 cache reload: (1.145925114s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
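
The cache_reload sequence above: remove a cached image from the node with crictl, confirm that crictl inspecti now fails, then let `minikube cache reload` push everything in the local cache back onto the node. As a sketch (assuming registry.k8s.io/pause:latest was previously added with `minikube cache add`, as in the add_remote step):

    minikube -p functional-659288 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-659288 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    minikube -p functional-659288 cache reload
    minikube -p functional-659288 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again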

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 kubectl -- --context functional-659288 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-659288 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0120 17:07:42.678645    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:08:23.639996    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-659288 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.132392718s)
functional_test.go:761: restart took 45.13250604s for "functional-659288" cluster.
I0120 17:08:27.170688    7844 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (45.13s)
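
ExtraConfig restarts the cluster with --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision. One illustrative way to confirm the flag reached the running kube-apiserver is to dump the static pod's command line; the label selector is the standard kubeadm component label, and the tr/grep step just makes the flag easy to spot.

    minikube start -p functional-659288 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-659288 -n kube-system get pod -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission-plugins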

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-659288 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
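
ComponentHealth lists the tier=control-plane pods and asserts each is Running and Ready, which is what the phase/status lines above reflect. Hand-rolled, the same listing is roughly:

    kubectl --context functional-659288 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'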

                                                
                                    
TestFunctional/serial/LogsCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 logs: (1.725491069s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 logs --file /tmp/TestFunctionalserialLogsFileCmd3278109301/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 logs --file /tmp/TestFunctionalserialLogsFileCmd3278109301/001/logs.txt: (1.83795477s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.84s)

                                                
                                    
TestFunctional/serial/InvalidService (4.41s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-659288 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-659288
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-659288: exit status 115 (572.784435ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31627 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-659288 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659288 config get cpus: exit status 14 (62.402764ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659288 config get cpus: exit status 14 (97.016303ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-659288 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-659288 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 45915: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.85s)

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-659288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (205.51467ms)

                                                
                                                
-- stdout --
	* [functional-659288] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 17:09:15.458043   45621 out.go:345] Setting OutFile to fd 1 ...
	I0120 17:09:15.458231   45621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:09:15.458253   45621 out.go:358] Setting ErrFile to fd 2...
	I0120 17:09:15.458274   45621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:09:15.462873   45621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 17:09:15.463311   45621 out.go:352] Setting JSON to false
	I0120 17:09:15.465316   45621 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3100,"bootTime":1737389856,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 17:09:15.465432   45621 start.go:139] virtualization:  
	I0120 17:09:15.472591   45621 out.go:177] * [functional-659288] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 17:09:15.475471   45621 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 17:09:15.475609   45621 notify.go:220] Checking for updates...
	I0120 17:09:15.481092   45621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 17:09:15.483929   45621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 17:09:15.486840   45621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	I0120 17:09:15.489694   45621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 17:09:15.492469   45621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 17:09:15.496109   45621 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:09:15.496824   45621 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 17:09:15.523890   45621 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 17:09:15.524036   45621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 17:09:15.585604   45621 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 17:09:15.576357253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 17:09:15.585721   45621 docker.go:318] overlay module found
	I0120 17:09:15.588971   45621 out.go:177] * Using the docker driver based on existing profile
	I0120 17:09:15.591862   45621 start.go:297] selected driver: docker
	I0120 17:09:15.591886   45621 start.go:901] validating driver "docker" against &{Name:functional-659288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-659288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 17:09:15.591994   45621 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 17:09:15.595577   45621 out.go:201] 
	W0120 17:09:15.598536   45621 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 17:09:15.601362   45621 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659288 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.47s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-659288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-659288 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (203.957509ms)

                                                
                                                
-- stdout --
	* [functional-659288] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 17:09:15.930704   45739 out.go:345] Setting OutFile to fd 1 ...
	I0120 17:09:15.930839   45739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:09:15.930851   45739 out.go:358] Setting ErrFile to fd 2...
	I0120 17:09:15.930857   45739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:09:15.931769   45739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 17:09:15.932245   45739 out.go:352] Setting JSON to false
	I0120 17:09:15.933195   45739 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3100,"bootTime":1737389856,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 17:09:15.933287   45739 start.go:139] virtualization:  
	I0120 17:09:15.936795   45739 out.go:177] * [functional-659288] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0120 17:09:15.939792   45739 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 17:09:15.939843   45739 notify.go:220] Checking for updates...
	I0120 17:09:15.945798   45739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 17:09:15.948616   45739 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 17:09:15.951512   45739 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	I0120 17:09:15.954377   45739 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 17:09:15.957331   45739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 17:09:15.960600   45739 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:09:15.961187   45739 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 17:09:15.990077   45739 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 17:09:15.990199   45739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 17:09:16.053085   45739 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 17:09:16.044197046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 17:09:16.053201   45739 docker.go:318] overlay module found
	I0120 17:09:16.058205   45739 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0120 17:09:16.061093   45739 start.go:297] selected driver: docker
	I0120 17:09:16.061117   45739 start.go:901] validating driver "docker" against &{Name:functional-659288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-659288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 17:09:16.061288   45739 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 17:09:16.064790   45739 out.go:201] 
	W0120 17:09:16.067818   45739 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 17:09:16.070561   45739 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
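The stderr above comes from a minikube invocation made under a French locale: the X/W lines report that the requested 250 MiB memory allocation is below the usable minimum of 1800 MB, which is exactly the localized error path this test wants to see. A rough way to provoke a localized failure like this by hand (the LC_ALL value and the exact flags are assumptions for illustration; the test's real invocation is not shown in this excerpt):

	# Ask for deliberately too little memory so minikube fails fast,
	# with a French locale so the error is emitted translated.
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-659288 \
	  --memory 250MB --alsologtostderr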

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)
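The three invocations above exercise the default, templated, and JSON output modes of minikube status. A minimal sketch of reproducing the same checks by hand against this profile; the commands mirror what the test ran, and the trailing jq filter is an illustrative addition (the field names come from the template the test itself uses):

	# Plain status, a custom Go template (the labels left of each colon are
	# arbitrary output labels; only the {{.Field}} parts are template fields),
	# and machine-readable JSON.
	out/minikube-linux-arm64 -p functional-659288 status
	out/minikube-linux-arm64 -p functional-659288 status \
	  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-arm64 -p functional-659288 status -o json | jq .Host   # "Running" on a healthy node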

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-659288 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-659288 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-74dkt" [7187dd87-6687-4946-8b7d-4e21bbe27b92] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-74dkt" [7187dd87-6687-4946-8b7d-4e21bbe27b92] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004262227s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32326
functional_test.go:1675: http://192.168.49.2:32326: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-8449669db6-74dkt

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32326
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.75s)
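The connectivity check boils down to creating a Deployment, exposing it on a NodePort, asking minikube for the resulting URL, and fetching it. A rough equivalent using the same names as the test; kubectl wait and curl are stand-ins for the polling and HTTP request the test harness performs internally:

	kubectl --context functional-659288 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-659288 expose deployment hello-node-connect \
	  --type=NodePort --port=8080
	# Wait for the pod, then resolve the NodePort URL and hit it.
	kubectl --context functional-659288 wait --for=condition=Ready \
	  pod -l app=hello-node-connect --timeout=120s
	URL=$(out/minikube-linux-arm64 -p functional-659288 service hello-node-connect --url)
	curl -s "$URL" | grep Hostname   # echoserver reports the pod name back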

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)
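Both listing modes are available directly; the JSON form is the convenient one for scripting. For example (the jq filter is illustrative and assumes the JSON output is keyed by addon name):

	out/minikube-linux-arm64 -p functional-659288 addons list
	out/minikube-linux-arm64 -p functional-659288 addons list -o json | jq 'keys'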

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [236980b7-d971-459e-9531-3a6a7a7673a8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004221114s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-659288 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-659288 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-659288 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-659288 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb69d858-7977-45bf-b3f8-a8933870f93c] Pending
helpers_test.go:344: "sp-pod" [bb69d858-7977-45bf-b3f8-a8933870f93c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb69d858-7977-45bf-b3f8-a8933870f93c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.009321268s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-659288 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-659288 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-659288 delete -f testdata/storage-provisioner/pod.yaml: (1.590596136s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-659288 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6eecb245-1baf-43a7-afe3-e832f2d4b74f] Pending
helpers_test.go:344: "sp-pod" [6eecb245-1baf-43a7-afe3-e832f2d4b74f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009652245s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-659288 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.80s)
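The test verifies that data written into a PVC-backed mount survives pod deletion: it creates the claim, starts sp-pod with the claim mounted at /tmp/mount, touches a file, deletes and recreates the pod, and checks the file is still there. A condensed sketch of the same sequence using the test's own manifests under testdata/; kubectl wait replaces the readiness polling the harness does:

	kubectl --context functional-659288 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-659288 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-659288 wait --for=condition=Ready pod sp-pod --timeout=180s
	kubectl --context functional-659288 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-659288 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-659288 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-659288 wait --for=condition=Ready pod sp-pod --timeout=180s
	kubectl --context functional-659288 exec sp-pod -- ls /tmp/mount   # foo should still be listed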

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh -n functional-659288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 cp functional-659288:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2539885847/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh -n functional-659288 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh -n functional-659288 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.05s)
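Together, SSHCmd and CpCmd cover running commands inside the node container and copying files in and out of it. A quick round trip using the test's cp-test.txt fixture; cp-test.out is a hypothetical local destination added for the comparison:

	out/minikube-linux-arm64 -p functional-659288 ssh "echo hello && cat /etc/hostname"
	out/minikube-linux-arm64 -p functional-659288 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-659288 cp functional-659288:/home/docker/cp-test.txt ./cp-test.out
	diff testdata/cp-test.txt ./cp-test.out   # no output means the copy survived intact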

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7844/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo cat /etc/test/nested/copy/7844/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7844.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo cat /etc/ssl/certs/7844.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7844.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo cat /usr/share/ca-certificates/7844.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/78442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo cat /etc/ssl/certs/78442.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/78442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo cat /usr/share/ca-certificates/78442.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)
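FileSync and CertSync confirm that files and certificates staged under the local MINIKUBE_HOME tree are copied into the node on start: the test's hosts file appears under /etc/test/nested/copy/7844/, and its 7844.pem certificate shows up in /etc/ssl/certs, in /usr/share/ca-certificates, and as the hashed symlink 51391683.0. A manual spot check of the same paths; the sha256sum comparison is an added illustration that all three copies are identical:

	out/minikube-linux-arm64 -p functional-659288 ssh \
	  "sudo sha256sum /etc/ssl/certs/7844.pem /usr/share/ca-certificates/7844.pem /etc/ssl/certs/51391683.0"
	out/minikube-linux-arm64 -p functional-659288 ssh "sudo cat /etc/test/nested/copy/7844/hosts"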

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-659288 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
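The label check pulls every label key off the first node with a Go template. The same query, plus a more readable variant; --show-labels is a standard kubectl alternative, not what the test uses:

	kubectl --context functional-659288 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	kubectl --context functional-659288 get nodes --show-labels   # one line per node, labels in the last column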

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659288 ssh "sudo systemctl is-active docker": exit status 1 (351.276161ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659288 ssh "sudo systemctl is-active crio": exit status 1 (335.341292ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
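Because this profile runs containerd, the other runtimes must report inactive; systemctl exits non-zero for an inactive unit, which is why the non-zero exits above count as success. The same checks by hand:

	# containerd should be active; docker and crio should not.
	out/minikube-linux-arm64 -p functional-659288 ssh "sudo systemctl is-active containerd"
	out/minikube-linux-arm64 -p functional-659288 ssh "sudo systemctl is-active docker"   # prints "inactive", exits non-zero
	out/minikube-linux-arm64 -p functional-659288 ssh "sudo systemctl is-active crio"     # prints "inactive", exits non-zero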

                                                
                                    
x
+
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 version -o=json --components: (1.237683255s)
--- PASS: TestFunctional/parallel/Version/components (1.24s)
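version --short prints only the minikube version, while -o=json --components also reports the versions of the bundled tools (containerd, buildctl, crictl, and so on) queried from the running node, which is presumably why it takes about a second. For example (the jq pass-through is illustrative):

	out/minikube-linux-arm64 -p functional-659288 version --short
	out/minikube-linux-arm64 -p functional-659288 version -o=json --components | jq .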

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-659288 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-659288
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-659288
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659288 image ls --format short --alsologtostderr:
I0120 17:09:19.794974   46313 out.go:345] Setting OutFile to fd 1 ...
I0120 17:09:19.795484   46313 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:19.795642   46313 out.go:358] Setting ErrFile to fd 2...
I0120 17:09:19.795663   46313 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:19.796121   46313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
I0120 17:09:19.797575   46313 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:19.801027   46313 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:19.802791   46313 cli_runner.go:164] Run: docker container inspect functional-659288 --format={{.State.Status}}
I0120 17:09:19.826991   46313 ssh_runner.go:195] Run: systemctl --version
I0120 17:09:19.827044   46313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659288
I0120 17:09:19.848684   46313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/functional-659288/id_rsa Username:docker}
I0120 17:09:19.938521   46313 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
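image ls reads the node's containerd image store over SSH (the crictl call visible in the stderr above) and can render it as short, table, json, or yaml, as the next three tests show. For scripting, the JSON form pairs naturally with jq; the filter below is illustrative and uses the repoTags field shown in the JSON test further down:

	out/minikube-linux-arm64 -p functional-659288 image ls --format short
	out/minikube-linux-arm64 -p functional-659288 image ls --format json | jq -r '.[].repoTags[]'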

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-659288 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | sha256:781d90 | 68.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-controller-manager     | v1.32.0            | sha256:a8d049 | 24MB   |
| registry.k8s.io/kube-proxy                  | v1.32.0            | sha256:2f5038 | 27.4MB |
| registry.k8s.io/kube-scheduler              | v1.32.0            | sha256:c3ff26 | 18.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
| docker.io/kicbase/echo-server               | functional-659288  | sha256:ce2d2c | 2.17MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:7fc9d4 | 67.9MB |
| registry.k8s.io/kube-apiserver              | v1.32.0            | sha256:2b5bd0 | 26.2MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-659288  | sha256:fab912 | 991B   |
| docker.io/library/nginx                     | alpine             | sha256:f9d642 | 21.6MB |
| localhost/my-image                          | functional-659288  | sha256:939277 | 831kB  |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659288 image ls --format table --alsologtostderr:
I0120 17:09:24.978832   46732 out.go:345] Setting OutFile to fd 1 ...
I0120 17:09:24.978944   46732 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:24.978955   46732 out.go:358] Setting ErrFile to fd 2...
I0120 17:09:24.978960   46732 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:24.979242   46732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
I0120 17:09:24.980692   46732 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:24.980833   46732 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:24.981396   46732 cli_runner.go:164] Run: docker container inspect functional-659288 --format={{.State.Status}}
I0120 17:09:25.005776   46732 ssh_runner.go:195] Run: systemctl --version
I0120 17:09:25.005838   46732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659288
I0120 17:09:25.026812   46732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/functional-659288/id_rsa Username:docker}
I0120 17:09:25.116066   46732 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-659288 image ls --format json --alsologtostderr:
[{"id":"sha256:fab91269d9d8fe00c1163e6e594c36b61bb063e11f4dadee2af151e5e5f3d01f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-659288"],"size":"991"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}
,{"id":"sha256:a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"23964889"},{"id":"sha256:2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"27362084"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-659288"],"size":"2173567"},{"id":"sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a
23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21565101"},{"id":"sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"68507108"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1
ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"67941650"},{"id":"sha256:2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc","repoDigests":["registry.k8s.io/kube-apiserve
r@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"26213662"},{"id":"sha256:c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"18922208"},{"id":"sha256:939277dd00027678145526bb21b1dc7dca03bf7955356d33a1d1f2cc50b673ab","repoDigests":[],"repoTags":["localhost/my-image:functional-659288"],"size":"830618"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659288 image ls --format json --alsologtostderr:
I0120 17:09:24.856241   46709 out.go:345] Setting OutFile to fd 1 ...
I0120 17:09:24.856429   46709 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:24.856455   46709 out.go:358] Setting ErrFile to fd 2...
I0120 17:09:24.856474   46709 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:24.856770   46709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
I0120 17:09:24.857511   46709 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:24.857684   46709 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:24.858240   46709 cli_runner.go:164] Run: docker container inspect functional-659288 --format={{.State.Status}}
I0120 17:09:24.880260   46709 ssh_runner.go:195] Run: systemctl --version
I0120 17:09:24.880313   46709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659288
I0120 17:09:24.898138   46709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/functional-659288/id_rsa Username:docker}
I0120 17:09:24.983914   46709 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-659288 image ls --format yaml --alsologtostderr:
- id: sha256:fab91269d9d8fe00c1163e6e594c36b61bb063e11f4dadee2af151e5e5f3d01f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-659288
size: "991"
- id: sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "68507108"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "26213662"
- id: sha256:a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "23964889"
- id: sha256:2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "27362084"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "67941650"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-659288
size: "2173567"
- id: sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "21565101"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "18922208"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659288 image ls --format yaml --alsologtostderr:
I0120 17:09:20.099728   46344 out.go:345] Setting OutFile to fd 1 ...
I0120 17:09:20.100206   46344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:20.103419   46344 out.go:358] Setting ErrFile to fd 2...
I0120 17:09:20.103485   46344 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:20.103897   46344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
I0120 17:09:20.104989   46344 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:20.105208   46344 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:20.106000   46344 cli_runner.go:164] Run: docker container inspect functional-659288 --format={{.State.Status}}
I0120 17:09:20.128918   46344 ssh_runner.go:195] Run: systemctl --version
I0120 17:09:20.128983   46344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659288
I0120 17:09:20.167100   46344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/functional-659288/id_rsa Username:docker}
I0120 17:09:20.264146   46344 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659288 ssh pgrep buildkitd: exit status 1 (302.415358ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image build -t localhost/my-image:functional-659288 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 image build -t localhost/my-image:functional-659288 testdata/build --alsologtostderr: (3.909428563s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-659288 image build -t localhost/my-image:functional-659288 testdata/build --alsologtostderr:
I0120 17:09:20.741715   46444 out.go:345] Setting OutFile to fd 1 ...
I0120 17:09:20.741905   46444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:20.741917   46444 out.go:358] Setting ErrFile to fd 2...
I0120 17:09:20.741924   46444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 17:09:20.742169   46444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
I0120 17:09:20.742816   46444 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:20.745601   46444 config.go:182] Loaded profile config "functional-659288": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 17:09:20.746102   46444 cli_runner.go:164] Run: docker container inspect functional-659288 --format={{.State.Status}}
I0120 17:09:20.764704   46444 ssh_runner.go:195] Run: systemctl --version
I0120 17:09:20.764756   46444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-659288
I0120 17:09:20.785177   46444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/functional-659288/id_rsa Username:docker}
I0120 17:09:20.884500   46444 build_images.go:161] Building image from path: /tmp/build.2155126688.tar
I0120 17:09:20.884576   46444 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 17:09:20.894813   46444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2155126688.tar
I0120 17:09:20.899213   46444 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2155126688.tar: stat -c "%s %y" /var/lib/minikube/build/build.2155126688.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2155126688.tar': No such file or directory
I0120 17:09:20.899245   46444 ssh_runner.go:362] scp /tmp/build.2155126688.tar --> /var/lib/minikube/build/build.2155126688.tar (3072 bytes)
I0120 17:09:20.941787   46444 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2155126688
I0120 17:09:20.953950   46444 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2155126688 -xf /var/lib/minikube/build/build.2155126688.tar
I0120 17:09:20.967301   46444 containerd.go:394] Building image: /var/lib/minikube/build/build.2155126688
I0120 17:09:20.967471   46444 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2155126688 --local dockerfile=/var/lib/minikube/build/build.2155126688 --output type=image,name=localhost/my-image:functional-659288
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:38319c78bb6b8ba504a5753b67a55942364f9200dc70a3cc872f8af022e2dc87
#8 exporting manifest sha256:38319c78bb6b8ba504a5753b67a55942364f9200dc70a3cc872f8af022e2dc87 0.0s done
#8 exporting config sha256:939277dd00027678145526bb21b1dc7dca03bf7955356d33a1d1f2cc50b673ab 0.0s done
#8 naming to localhost/my-image:functional-659288 done
#8 DONE 0.2s
I0120 17:09:24.531011   46444 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2155126688 --local dockerfile=/var/lib/minikube/build/build.2155126688 --output type=image,name=localhost/my-image:functional-659288: (3.563507085s)
I0120 17:09:24.531085   46444 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2155126688
I0120 17:09:24.541712   46444 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2155126688.tar
I0120 17:09:24.552273   46444 build_images.go:217] Built localhost/my-image:functional-659288 from /tmp/build.2155126688.tar
I0120 17:09:24.552316   46444 build_images.go:133] succeeded building to: functional-659288
I0120 17:09:24.552332   46444 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls
2025/01/20 17:09:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.45s)
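On the containerd runtime, image build tars the build context, copies it into the node, and drives BuildKit's buildctl there (the buildctl invocation is visible in the log above) before tagging the result into the node's image store. Reproducing the test's build by hand with its testdata/build context:

	# Dockerfile and content.txt live in testdata/build; the image lands in the
	# node's containerd store, not in the host docker daemon.
	out/minikube-linux-arm64 -p functional-659288 image build \
	  -t localhost/my-image:functional-659288 testdata/build --alsologtostderr
	out/minikube-linux-arm64 -p functional-659288 image ls | grep my-image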

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-659288
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
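update-context refreshes the kubeconfig entry for the profile so that its server address and credentials match the running cluster; the three subtests differ only in the state the kubeconfig starts from (unchanged, missing cluster, missing contexts), so the command itself is identical each time. For example, against the kubeconfig this run uses:

	KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig \
	  out/minikube-linux-arm64 -p functional-659288 update-context --alsologtostderr -v=2
	kubectl config get-contexts functional-659288   # the refreshed context should be listed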

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image load --daemon kicbase/echo-server:functional-659288 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 image load --daemon kicbase/echo-server:functional-659288 --alsologtostderr: (1.210835929s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image load --daemon kicbase/echo-server:functional-659288 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 image load --daemon kicbase/echo-server:functional-659288 --alsologtostderr: (1.120199159s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-659288 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-659288 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-96vj9" [b27cf9a6-4d19-4754-8d3d-69c7cf0c0ac1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-96vj9" [b27cf9a6-4d19-4754-8d3d-69c7cf0c0ac1] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003828349s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-659288
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image load --daemon kicbase/echo-server:functional-659288 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-659288 image load --daemon kicbase/echo-server:functional-659288 --alsologtostderr: (1.161721277s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image save kicbase/echo-server:functional-659288 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image rm kicbase/echo-server:functional-659288 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-659288
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 image save --daemon kicbase/echo-server:functional-659288 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-659288
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
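Taken together, the ImageCommands tests above exercise a round trip through the minikube image subcommands. A rough manual repro of that flow, using the same profile and image names as the test (the tarball path here is a placeholder, not the path from this run):
  # tag a local image with the profile-specific name used by the test
  docker pull kicbase/echo-server:latest
  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-659288
  # push it into the cluster's image store and confirm it is listed
  out/minikube-linux-arm64 -p functional-659288 image load --daemon kicbase/echo-server:functional-659288
  out/minikube-linux-arm64 -p functional-659288 image ls
  # save to a tarball, remove it from the cluster, then load it back from the file
  out/minikube-linux-arm64 -p functional-659288 image save kicbase/echo-server:functional-659288 /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-659288 image rm kicbase/echo-server:functional-659288
  out/minikube-linux-arm64 -p functional-659288 image load /tmp/echo-server-save.tar
  # export the cluster image back into the host docker daemon and inspect it
  out/minikube-linux-arm64 -p functional-659288 image save --daemon kicbase/echo-server:functional-659288
  docker image inspect kicbase/echo-server:functional-659288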

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-659288 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-659288 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-659288 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-659288 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 42482: os: process already finished
helpers_test.go:502: unable to terminate pid 42361: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-659288 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-659288 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [af9a2433-a9e5-4378-9957-c5c2f3a1a749] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [af9a2433-a9e5-4378-9957-c5c2f3a1a749] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003119449s
I0120 17:08:54.823505    7844 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 service list -o json
functional_test.go:1494: Took "341.065412ms" to run "out/minikube-linux-arm64 -p functional-659288 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30989
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30989
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
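The ServiceCmd tests above amount to deploying a NodePort service and resolving its URL through minikube. A minimal sketch using the same deployment name and image as the test:
  # deploy and expose the echo server used by the test
  kubectl --context functional-659288 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-659288 expose deployment hello-node --type=NodePort --port=8080
  # list services known to the profile, in plain and JSON output
  out/minikube-linux-arm64 -p functional-659288 service list
  out/minikube-linux-arm64 -p functional-659288 service list -o json
  # resolve the service endpoint: https URL, formatted IP, plain URL
  out/minikube-linux-arm64 -p functional-659288 service --namespace=default --https --url hello-node
  out/minikube-linux-arm64 -p functional-659288 service hello-node --url --format={{.IP}}
  out/minikube-linux-arm64 -p functional-659288 service hello-node --url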

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-659288 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.168.60 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-659288 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
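The TunnelCmd serial group above follows one workflow: start a tunnel, create a LoadBalancer service, wait for an ingress IP, probe it, then tear the tunnel down. A rough repro under the same profile; the curl line is an assumed stand-in for the test's HTTP check, and the IP is simply the one reported in this run:
  # run the tunnel in the background (the test keeps it running as a daemon)
  out/minikube-linux-arm64 -p functional-659288 tunnel --alsologtostderr &
  # create the LoadBalancer service from the test's manifest and read its ingress IP
  kubectl --context functional-659288 apply -f testdata/testsvc.yaml
  kubectl --context functional-659288 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  # probe the reported address (10.103.168.60 in this run)
  curl -s http://10.103.168.60
  # stop the tunnel process started above
  kill %1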

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "338.183807ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "66.772259ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "351.73414ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "61.446061ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
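The ProfileCmd tests above only vary the output format of profile list; the timings in the log show the light variants skipping the slower status checks. For reference:
  out/minikube-linux-arm64 profile list                   # table output
  out/minikube-linux-arm64 profile list -l                 # light output (faster, less validation)
  out/minikube-linux-arm64 profile list -o json
  out/minikube-linux-arm64 profile list -o json --light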

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdany-port671214082/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737392944429219069" to /tmp/TestFunctionalparallelMountCmdany-port671214082/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737392944429219069" to /tmp/TestFunctionalparallelMountCmdany-port671214082/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737392944429219069" to /tmp/TestFunctionalparallelMountCmdany-port671214082/001/test-1737392944429219069
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (369.530862ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0120 17:09:04.799804    7844 retry.go:31] will retry after 358.199143ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 17:09 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 17:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 17:09 test-1737392944429219069
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh cat /mount-9p/test-1737392944429219069
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-659288 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a39dcb8d-ccd9-4f7e-94ae-01c9b5445ba9] Pending
helpers_test.go:344: "busybox-mount" [a39dcb8d-ccd9-4f7e-94ae-01c9b5445ba9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a39dcb8d-ccd9-4f7e-94ae-01c9b5445ba9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a39dcb8d-ccd9-4f7e-94ae-01c9b5445ba9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00384956s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-659288 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdany-port671214082/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.83s)
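The any-port mount test above is essentially: mount a host directory into the guest over 9p, verify it from inside the node, exercise it from a pod, then unmount. A rough manual version, with the host path as a placeholder rather than the temp directory from this run:
  # mount a host directory at /mount-9p inside the node; the command stays in the foreground, so background it
  out/minikube-linux-arm64 mount -p functional-659288 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
  # confirm the 9p mount is visible inside the node and list its contents
  out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-659288 ssh -- ls -la /mount-9p
  # the test then runs a pod against the mount using its bundled manifest
  kubectl --context functional-659288 replace --force -f testdata/busybox-mount-test.yaml
  # unmount and stop the background mount process
  out/minikube-linux-arm64 -p functional-659288 ssh "sudo umount -f /mount-9p"
  kill %1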

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdspecific-port2232104764/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.661043ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0120 17:09:12.631558    7844 retry.go:31] will retry after 431.582943ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdspecific-port2232104764/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-659288 ssh "sudo umount -f /mount-9p": exit status 1 (270.751554ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-659288 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdspecific-port2232104764/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup487054928/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup487054928/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup487054928/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-659288 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup487054928/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup487054928/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-659288 /tmp/TestFunctionalparallelMountCmdVerifyCleanup487054928/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)
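VerifyCleanup starts three overlapping mounts and relies on a single kill switch to clean them all up. The corresponding commands, with the host path again a placeholder:
  # start three mounts of the same host directory
  out/minikube-linux-arm64 mount -p functional-659288 /tmp/mount-demo:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-659288 /tmp/mount-demo:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-659288 /tmp/mount-demo:/mount3 --alsologtostderr -v=1 &
  # check a target, then kill every mount process for the profile in one go
  out/minikube-linux-arm64 -p functional-659288 ssh "findmnt -T" /mount1
  out/minikube-linux-arm64 mount -p functional-659288 --kill=true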

                                                
                                    
TestFunctional/delete_echo-server_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-659288
--- PASS: TestFunctional/delete_echo-server_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-659288
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-659288
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (117s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-182375 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0120 17:09:45.562253    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-182375 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m56.144834172s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (117.00s)
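StartCluster brings up a multi-control-plane (HA) cluster with the flags shown in the log, then checks per-node status. The essential invocations from this run:
  # create the HA cluster for the ha-182375 profile
  out/minikube-linux-arm64 start -p ha-182375 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
  # report host/kubelet/apiserver status for every node
  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr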

                                                
                                    
TestMultiControlPlane/serial/DeployApp (31.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-182375 -- rollout status deployment/busybox: (28.776957382s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-nqprj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-v8n7x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-zrmmc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-nqprj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-v8n7x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-zrmmc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-nqprj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-v8n7x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-zrmmc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.76s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-nqprj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-nqprj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-v8n7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-v8n7x -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-zrmmc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-182375 -- exec busybox-58667487b6-zrmmc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (22.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-182375 -v=7 --alsologtostderr
E0120 17:12:01.701046    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-182375 -v=7 --alsologtostderr: (21.20435949s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.16s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-182375 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.018105516s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-182375 status --output json -v=7 --alsologtostderr: (1.027924536s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp testdata/cp-test.txt ha-182375:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2216205379/001/cp-test_ha-182375.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375:/home/docker/cp-test.txt ha-182375-m02:/home/docker/cp-test_ha-182375_ha-182375-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m02 "sudo cat /home/docker/cp-test_ha-182375_ha-182375-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375:/home/docker/cp-test.txt ha-182375-m03:/home/docker/cp-test_ha-182375_ha-182375-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m03 "sudo cat /home/docker/cp-test_ha-182375_ha-182375-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375:/home/docker/cp-test.txt ha-182375-m04:/home/docker/cp-test_ha-182375_ha-182375-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m04 "sudo cat /home/docker/cp-test_ha-182375_ha-182375-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp testdata/cp-test.txt ha-182375-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2216205379/001/cp-test_ha-182375-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m02:/home/docker/cp-test.txt ha-182375:/home/docker/cp-test_ha-182375-m02_ha-182375.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375 "sudo cat /home/docker/cp-test_ha-182375-m02_ha-182375.txt"
E0120 17:12:29.404437    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m02:/home/docker/cp-test.txt ha-182375-m03:/home/docker/cp-test_ha-182375-m02_ha-182375-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m03 "sudo cat /home/docker/cp-test_ha-182375-m02_ha-182375-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m02:/home/docker/cp-test.txt ha-182375-m04:/home/docker/cp-test_ha-182375-m02_ha-182375-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m04 "sudo cat /home/docker/cp-test_ha-182375-m02_ha-182375-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp testdata/cp-test.txt ha-182375-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2216205379/001/cp-test_ha-182375-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m03:/home/docker/cp-test.txt ha-182375:/home/docker/cp-test_ha-182375-m03_ha-182375.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375 "sudo cat /home/docker/cp-test_ha-182375-m03_ha-182375.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m03:/home/docker/cp-test.txt ha-182375-m02:/home/docker/cp-test_ha-182375-m03_ha-182375-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m02 "sudo cat /home/docker/cp-test_ha-182375-m03_ha-182375-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m03:/home/docker/cp-test.txt ha-182375-m04:/home/docker/cp-test_ha-182375-m03_ha-182375-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m04 "sudo cat /home/docker/cp-test_ha-182375-m03_ha-182375-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp testdata/cp-test.txt ha-182375-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2216205379/001/cp-test_ha-182375-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m04:/home/docker/cp-test.txt ha-182375:/home/docker/cp-test_ha-182375-m04_ha-182375.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375 "sudo cat /home/docker/cp-test_ha-182375-m04_ha-182375.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m04:/home/docker/cp-test.txt ha-182375-m02:/home/docker/cp-test_ha-182375-m04_ha-182375-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m02 "sudo cat /home/docker/cp-test_ha-182375-m04_ha-182375-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m04:/home/docker/cp-test.txt ha-182375-m03:/home/docker/cp-test_ha-182375-m04_ha-182375-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m03 "sudo cat /home/docker/cp-test_ha-182375-m04_ha-182375-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.15s)
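CopyFile is a matrix of minikube cp transfers (host to node, node to host, node to node), each verified by reading the file back over ssh -n. Two cells of that matrix, copied from the commands in this run:
  # host -> first control plane, then read it back on the node
  out/minikube-linux-arm64 -p ha-182375 cp testdata/cp-test.txt ha-182375:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375 "sudo cat /home/docker/cp-test.txt"
  # node -> node (m02 to m03), verified on the destination node
  out/minikube-linux-arm64 -p ha-182375 cp ha-182375-m02:/home/docker/cp-test.txt ha-182375-m03:/home/docker/cp-test_ha-182375-m02_ha-182375-m03.txt
  out/minikube-linux-arm64 -p ha-182375 ssh -n ha-182375-m03 "sudo cat /home/docker/cp-test_ha-182375-m02_ha-182375-m03.txt"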

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-182375 node stop m02 -v=7 --alsologtostderr: (12.094850286s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr: exit status 7 (736.866858ms)

                                                
                                                
-- stdout --
	ha-182375
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-182375-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-182375-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-182375-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 17:12:53.012796   62989 out.go:345] Setting OutFile to fd 1 ...
	I0120 17:12:53.012996   62989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:12:53.013024   62989 out.go:358] Setting ErrFile to fd 2...
	I0120 17:12:53.013042   62989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:12:53.013340   62989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 17:12:53.013610   62989 out.go:352] Setting JSON to false
	I0120 17:12:53.013679   62989 mustload.go:65] Loading cluster: ha-182375
	I0120 17:12:53.014305   62989 config.go:182] Loaded profile config "ha-182375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:12:53.014364   62989 status.go:174] checking status of ha-182375 ...
	I0120 17:12:53.013780   62989 notify.go:220] Checking for updates...
	I0120 17:12:53.015721   62989 cli_runner.go:164] Run: docker container inspect ha-182375 --format={{.State.Status}}
	I0120 17:12:53.036444   62989 status.go:371] ha-182375 host status = "Running" (err=<nil>)
	I0120 17:12:53.036465   62989 host.go:66] Checking if "ha-182375" exists ...
	I0120 17:12:53.036838   62989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-182375
	I0120 17:12:53.061143   62989 host.go:66] Checking if "ha-182375" exists ...
	I0120 17:12:53.061534   62989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 17:12:53.061589   62989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-182375
	I0120 17:12:53.079056   62989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/ha-182375/id_rsa Username:docker}
	I0120 17:12:53.174309   62989 ssh_runner.go:195] Run: systemctl --version
	I0120 17:12:53.184688   62989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 17:12:53.205036   62989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 17:12:53.263889   62989 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:73 SystemTime:2025-01-20 17:12:53.254019929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 17:12:53.264499   62989 kubeconfig.go:125] found "ha-182375" server: "https://192.168.49.254:8443"
	I0120 17:12:53.264534   62989 api_server.go:166] Checking apiserver status ...
	I0120 17:12:53.264578   62989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 17:12:53.276499   62989 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1418/cgroup
	I0120 17:12:53.286327   62989 api_server.go:182] apiserver freezer: "8:freezer:/docker/ebdd170536fb3e88bafc956d919d8f7e2a5a188e984d2083c057e80e59b56fbd/kubepods/burstable/podd4fb69f0015a7e230ba0861e5b9adcf2/c436a814ecc7010edd8984fc2003325f4b4ee8643cbc5fc7e3157958f65de4f7"
	I0120 17:12:53.286407   62989 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ebdd170536fb3e88bafc956d919d8f7e2a5a188e984d2083c057e80e59b56fbd/kubepods/burstable/podd4fb69f0015a7e230ba0861e5b9adcf2/c436a814ecc7010edd8984fc2003325f4b4ee8643cbc5fc7e3157958f65de4f7/freezer.state
	I0120 17:12:53.295428   62989 api_server.go:204] freezer state: "THAWED"
	I0120 17:12:53.295459   62989 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 17:12:53.304876   62989 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 17:12:53.304933   62989 status.go:463] ha-182375 apiserver status = Running (err=<nil>)
	I0120 17:12:53.304943   62989 status.go:176] ha-182375 status: &{Name:ha-182375 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 17:12:53.304959   62989 status.go:174] checking status of ha-182375-m02 ...
	I0120 17:12:53.305294   62989 cli_runner.go:164] Run: docker container inspect ha-182375-m02 --format={{.State.Status}}
	I0120 17:12:53.323910   62989 status.go:371] ha-182375-m02 host status = "Stopped" (err=<nil>)
	I0120 17:12:53.323934   62989 status.go:384] host is not running, skipping remaining checks
	I0120 17:12:53.323941   62989 status.go:176] ha-182375-m02 status: &{Name:ha-182375-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 17:12:53.323961   62989 status.go:174] checking status of ha-182375-m03 ...
	I0120 17:12:53.324312   62989 cli_runner.go:164] Run: docker container inspect ha-182375-m03 --format={{.State.Status}}
	I0120 17:12:53.342582   62989 status.go:371] ha-182375-m03 host status = "Running" (err=<nil>)
	I0120 17:12:53.342607   62989 host.go:66] Checking if "ha-182375-m03" exists ...
	I0120 17:12:53.343050   62989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-182375-m03
	I0120 17:12:53.363885   62989 host.go:66] Checking if "ha-182375-m03" exists ...
	I0120 17:12:53.364210   62989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 17:12:53.364261   62989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-182375-m03
	I0120 17:12:53.381857   62989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/ha-182375-m03/id_rsa Username:docker}
	I0120 17:12:53.469726   62989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 17:12:53.482652   62989 kubeconfig.go:125] found "ha-182375" server: "https://192.168.49.254:8443"
	I0120 17:12:53.482682   62989 api_server.go:166] Checking apiserver status ...
	I0120 17:12:53.482724   62989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 17:12:53.495148   62989 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1351/cgroup
	I0120 17:12:53.506056   62989 api_server.go:182] apiserver freezer: "8:freezer:/docker/4bdc1fdbbbb242d873233659f8d132455258965e40ed8691b0d442726398b98d/kubepods/burstable/pod23e98bdcaec70a4674a183831b592b8b/1f80062fae73a56439948706345ba25d4b9a6f32f6c40de62fb8473335698242"
	I0120 17:12:53.506129   62989 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4bdc1fdbbbb242d873233659f8d132455258965e40ed8691b0d442726398b98d/kubepods/burstable/pod23e98bdcaec70a4674a183831b592b8b/1f80062fae73a56439948706345ba25d4b9a6f32f6c40de62fb8473335698242/freezer.state
	I0120 17:12:53.516749   62989 api_server.go:204] freezer state: "THAWED"
	I0120 17:12:53.516783   62989 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 17:12:53.525455   62989 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 17:12:53.525487   62989 status.go:463] ha-182375-m03 apiserver status = Running (err=<nil>)
	I0120 17:12:53.525510   62989 status.go:176] ha-182375-m03 status: &{Name:ha-182375-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 17:12:53.525536   62989 status.go:174] checking status of ha-182375-m04 ...
	I0120 17:12:53.525852   62989 cli_runner.go:164] Run: docker container inspect ha-182375-m04 --format={{.State.Status}}
	I0120 17:12:53.548230   62989 status.go:371] ha-182375-m04 host status = "Running" (err=<nil>)
	I0120 17:12:53.548255   62989 host.go:66] Checking if "ha-182375-m04" exists ...
	I0120 17:12:53.548568   62989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-182375-m04
	I0120 17:12:53.567428   62989 host.go:66] Checking if "ha-182375-m04" exists ...
	I0120 17:12:53.567738   62989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 17:12:53.567802   62989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-182375-m04
	I0120 17:12:53.584860   62989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/ha-182375-m04/id_rsa Username:docker}
	I0120 17:12:53.677119   62989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 17:12:53.688951   62989 status.go:176] ha-182375-m04 status: &{Name:ha-182375-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.83s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (28.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-182375 node start m02 -v=7 --alsologtostderr: (27.592638586s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr: (1.04031013s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.76s)
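StopSecondaryNode and RestartSecondaryNode pair up: stop one control-plane node, confirm that status reports it as stopped (and exits with status 7 while a node is down, as seen above), then start it again and re-check the node list. In command form:
  # stop the m02 control-plane node, then check cluster status
  out/minikube-linux-arm64 -p ha-182375 node stop m02 -v=7 --alsologtostderr
  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr
  # bring it back and confirm all nodes are registered again
  out/minikube-linux-arm64 -p ha-182375 node start m02 -v=7 --alsologtostderr
  kubectl get nodes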

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.153146411s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.15s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-182375 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-182375 -v=7 --alsologtostderr
E0120 17:13:40.210123    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:40.216493    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:40.227941    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:40.249468    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:40.290871    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:40.372329    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:40.533709    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:40.855384    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:41.497439    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:42.778744    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:45.340554    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:13:50.462643    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:14:00.704690    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-182375 -v=7 --alsologtostderr: (36.967105241s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-182375 --wait=true -v=7 --alsologtostderr
E0120 17:14:21.186803    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:15:02.148657    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-182375 --wait=true -v=7 --alsologtostderr: (1m15.338604292s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-182375
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (112.48s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.72s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-182375 node delete m03 -v=7 --alsologtostderr: (9.755914372s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.72s)
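
Note on the readiness check above (ha_test.go:521): the kubectl call uses a go-template that walks each node's status.conditions and prints the status of the "Ready" condition. The snippet below is a minimal plain-Go sketch, not part of the test suite, that runs that exact template string over a hand-built stand-in for the node-list JSON so the expression's output is easy to see.

package main

import (
	"os"
	"text/template"
)

func main() {
	// The template string taken verbatim from the test invocation above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hand-built stand-in for `kubectl get nodes -o json` (assumption: two Ready nodes).
	node := func(ready string) map[string]any {
		return map[string]any{
			"status": map[string]any{
				"conditions": []any{
					map[string]any{"type": "Ready", "status": ready},
				},
			},
		}
	}
	nodeList := map[string]any{"items": []any{node("True"), node("True")}}

	t := template.Must(template.New("ready").Parse(tmpl))
	_ = t.Execute(os.Stdout, nodeList) // prints " True" on one line per node
}
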

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (25.12s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-182375 stop -v=7 --alsologtostderr: (24.998250004s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr: exit status 7 (122.547636ms)

-- stdout --
	ha-182375
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-182375-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-182375-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0120 17:15:53.416099   77123 out.go:345] Setting OutFile to fd 1 ...
	I0120 17:15:53.416215   77123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:15:53.416227   77123 out.go:358] Setting ErrFile to fd 2...
	I0120 17:15:53.416232   77123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:15:53.416460   77123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 17:15:53.416688   77123 out.go:352] Setting JSON to false
	I0120 17:15:53.416724   77123 mustload.go:65] Loading cluster: ha-182375
	I0120 17:15:53.416819   77123 notify.go:220] Checking for updates...
	I0120 17:15:53.417165   77123 config.go:182] Loaded profile config "ha-182375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:15:53.417182   77123 status.go:174] checking status of ha-182375 ...
	I0120 17:15:53.417757   77123 cli_runner.go:164] Run: docker container inspect ha-182375 --format={{.State.Status}}
	I0120 17:15:53.437125   77123 status.go:371] ha-182375 host status = "Stopped" (err=<nil>)
	I0120 17:15:53.437148   77123 status.go:384] host is not running, skipping remaining checks
	I0120 17:15:53.437155   77123 status.go:176] ha-182375 status: &{Name:ha-182375 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 17:15:53.437213   77123 status.go:174] checking status of ha-182375-m02 ...
	I0120 17:15:53.437518   77123 cli_runner.go:164] Run: docker container inspect ha-182375-m02 --format={{.State.Status}}
	I0120 17:15:53.462893   77123 status.go:371] ha-182375-m02 host status = "Stopped" (err=<nil>)
	I0120 17:15:53.462977   77123 status.go:384] host is not running, skipping remaining checks
	I0120 17:15:53.462985   77123 status.go:176] ha-182375-m02 status: &{Name:ha-182375-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 17:15:53.463004   77123 status.go:174] checking status of ha-182375-m04 ...
	I0120 17:15:53.463572   77123 cli_runner.go:164] Run: docker container inspect ha-182375-m04 --format={{.State.Status}}
	I0120 17:15:53.486750   77123 status.go:371] ha-182375-m04 host status = "Stopped" (err=<nil>)
	I0120 17:15:53.486772   77123 status.go:384] host is not running, skipping remaining checks
	I0120 17:15:53.486778   77123 status.go:176] ha-182375-m04 status: &{Name:ha-182375-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.12s)
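
For reference, the stderr above prints one status struct per node (status.go:176), and the status command exits with status 7 because every host is stopped rather than failing outright. The sketch below is a simplified illustration of that mapping using only the field names visible in the logged struct literals; it is not minikube's actual status type or exit-code logic.

package main

import "fmt"

// Status mirrors the fields visible in the struct printed at status.go:176 above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// exitCodeFor returns a non-zero code when any host is stopped; 7 matches the
// "exit status 7" observed in this run (assumption: stopped hosts drive the code).
func exitCodeFor(statuses []Status) int {
	for _, s := range statuses {
		if s.Host == "Stopped" {
			return 7
		}
	}
	return 0
}

func main() {
	nodes := []Status{
		{Name: "ha-182375", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
		{Name: "ha-182375-m02", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
		{Name: "ha-182375-m04", Host: "Stopped", Kubelet: "Stopped", Worker: true},
	}
	fmt.Println(exitCodeFor(nodes)) // 7
}
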

TestMultiControlPlane/serial/RestartCluster (86.76s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-182375 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0120 17:16:24.070000    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:17:01.700748    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-182375 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m25.79427928s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (86.76s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

TestMultiControlPlane/serial/AddSecondaryNode (43.29s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-182375 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-182375 --control-plane -v=7 --alsologtostderr: (42.195055262s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-182375 status -v=7 --alsologtostderr: (1.089852218s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.29s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.01737876s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

TestJSONOutput/start/Command (84.24s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-991683 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0120 17:18:40.211497    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:19:07.915766    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-991683 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m24.23768097s)
--- PASS: TestJSONOutput/start/Command (84.24s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-991683 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-991683 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.27s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-991683 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-991683 --output=json --user=testUser: (1.272261602s)
--- PASS: TestJSONOutput/stop/Command (1.27s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-194631 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-194631 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (104.398371ms)

-- stdout --
	{"specversion":"1.0","id":"ee1d2a93-3110-4a61-a9a8-22e14593fdab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-194631] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ebd84561-5ab5-4198-a014-4687a3f7d328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20109"}}
	{"specversion":"1.0","id":"fd366824-e776-4d2b-a01a-0019c091e24b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c24a3cb0-6efd-4ecd-9ae5-f842b60ee67e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig"}}
	{"specversion":"1.0","id":"c65da0d6-93ed-45c9-ac8f-a677b4a2b0a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube"}}
	{"specversion":"1.0","id":"65c27ba3-1612-44c4-9840-f4b2807c8660","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e03f60db-942e-42c5-9a78-8a020867f6a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6af1c9d8-e732-406a-9f98-9f66eb936cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-194631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-194631
--- PASS: TestErrorJSONOutput (0.26s)
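
Each stdout line above is a CloudEvents-style JSON object with the fields specversion, id, source, type, datacontenttype, and data. The sketch below is a minimal plain-Go consumer for one such line, assuming only the fields shown in the log (the error event's data object is abridged here); it is not the test's own helper code.

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the field names visible in the logged --output=json lines.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Abridged copy of the error event from the stdout above.
	line := `{"specversion":"1.0","id":"6af1c9d8-e732-406a-9f98-9f66eb936cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["exitcode"], e.Data["name"]) // io.k8s.sigs.minikube.error 56 DRV_UNSUPPORTED_OS
}
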

TestKicCustomNetwork/create_custom_network (40.95s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-432546 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-432546 --network=: (38.769980797s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-432546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-432546
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-432546: (2.15055745s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.95s)

TestKicCustomNetwork/use_default_bridge_network (36.26s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-420461 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-420461 --network=bridge: (34.199823117s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-420461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-420461
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-420461: (2.034132257s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.26s)

TestKicExistingNetwork (37.37s)

=== RUN   TestKicExistingNetwork
I0120 17:21:03.189213    7844 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0120 17:21:03.205560    7844 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0120 17:21:03.205641    7844 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0120 17:21:03.205659    7844 cli_runner.go:164] Run: docker network inspect existing-network
W0120 17:21:03.222595    7844 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0120 17:21:03.222628    7844 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0120 17:21:03.222646    7844 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0120 17:21:03.222750    7844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 17:21:03.239950    7844 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e2e4b78005e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:31:ca:69:0d} reservation:<nil>}
I0120 17:21:03.240246    7844 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cd3230}
I0120 17:21:03.240269    7844 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0120 17:21:03.240320    7844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0120 17:21:03.312476    7844 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-499291 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-499291 --network=existing-network: (35.159299692s)
helpers_test.go:175: Cleaning up "existing-network-499291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-499291
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-499291: (2.055410143s)
I0120 17:21:40.544544    7844 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.37s)
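
The log above shows the subnet-selection step before the network is created: 192.168.49.0/24 is skipped because the existing minikube bridge owns it, and 192.168.58.0/24 is chosen as the first free /24. The sketch below illustrates that idea only; it is not minikube's network_create.go, and the step size between candidate subnets is an assumption based on the two subnets seen in this log.

package main

import "fmt"

// firstFreeSubnet walks /24 candidates in the 192.168.0.0/16 private range and
// returns the first one not already taken (assumption: candidates step by 9 in
// the third octet, matching 192.168.49.0/24 -> 192.168.58.0/24 seen above).
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{"192.168.49.0/24": true} // the existing minikube bridge from the log
	fmt.Println(firstFreeSubnet(taken))               // 192.168.58.0/24
	// The test then creates the network with the chosen subnet, mirroring the
	// logged command:
	//   docker network create --driver=bridge --subnet=192.168.58.0/24 \
	//     --gateway=192.168.58.1 -o --ip-masq -o --icc \
	//     -o com.docker.network.driver.mtu=1500 existing-network
}
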

TestKicCustomSubnet (33.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-223872 --subnet=192.168.60.0/24
E0120 17:22:01.699988    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-223872 --subnet=192.168.60.0/24: (31.702389624s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-223872 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-223872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-223872
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-223872: (1.740680064s)
--- PASS: TestKicCustomSubnet (33.46s)

TestKicStaticIP (32.27s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-625137 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-625137 --static-ip=192.168.200.200: (29.955962236s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-625137 ip
helpers_test.go:175: Cleaning up "static-ip-625137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-625137
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-625137: (2.151005961s)
--- PASS: TestKicStaticIP (32.27s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.49s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-002585 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-002585 --driver=docker  --container-runtime=containerd: (30.120252378s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-005309 --driver=docker  --container-runtime=containerd
E0120 17:23:24.766664    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:23:40.216408    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-005309 --driver=docker  --container-runtime=containerd: (35.893730969s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-002585
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-005309
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-005309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-005309
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-005309: (2.037944524s)
helpers_test.go:175: Cleaning up "first-002585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-002585
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-002585: (2.00317332s)
--- PASS: TestMinikubeProfile (71.49s)

TestMountStart/serial/StartWithMountFirst (6.31s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-319725 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-319725 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.306737554s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.31s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-319725 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.37s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-330383 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-330383 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.371434428s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.37s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-330383 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-319725 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-319725 --alsologtostderr -v=5: (1.62238584s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-330383 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-330383
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-330383: (1.205374677s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.38s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-330383
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-330383: (6.378695903s)
--- PASS: TestMountStart/serial/RestartStopped (7.38s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-330383 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (91.48s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-454476 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-454476 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m30.973680737s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (91.48s)

TestMultiNode/serial/DeployApp2Nodes (18.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-454476 -- rollout status deployment/busybox: (16.481474726s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-f9ljr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-nrkmm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-f9ljr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-nrkmm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-f9ljr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-nrkmm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.45s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-f9ljr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-f9ljr -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-nrkmm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-454476 -- exec busybox-58667487b6-nrkmm -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
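
The shell pipeline used above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the third space-separated field of the fifth output line, which is the host gateway address the pods then ping (192.168.67.1 in this run). The sketch below is a small plain-Go equivalent of that extraction, run against a hypothetical busybox nslookup output; it is only an illustration of what the pipeline selects.

package main

import (
	"fmt"
	"strings"
)

// fifthLineThirdField mimics `awk 'NR==5' | cut -d' ' -f3`: pick line 5, then
// the third field when splitting on single spaces.
func fifthLineThirdField(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical nslookup output; the real format depends on the busybox image.
	sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(fifthLineThirdField(sample)) // 192.168.67.1
}
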

TestMultiNode/serial/AddNode (18.98s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-454476 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-454476 -v 3 --alsologtostderr: (18.332142171s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.98s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-454476 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (9.96s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp testdata/cp-test.txt multinode-454476:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp multinode-454476:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1038997921/001/cp-test_multinode-454476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp multinode-454476:/home/docker/cp-test.txt multinode-454476-m02:/home/docker/cp-test_multinode-454476_multinode-454476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m02 "sudo cat /home/docker/cp-test_multinode-454476_multinode-454476-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp multinode-454476:/home/docker/cp-test.txt multinode-454476-m03:/home/docker/cp-test_multinode-454476_multinode-454476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m03 "sudo cat /home/docker/cp-test_multinode-454476_multinode-454476-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp testdata/cp-test.txt multinode-454476-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp multinode-454476-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1038997921/001/cp-test_multinode-454476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp multinode-454476-m02:/home/docker/cp-test.txt multinode-454476:/home/docker/cp-test_multinode-454476-m02_multinode-454476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476 "sudo cat /home/docker/cp-test_multinode-454476-m02_multinode-454476.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp multinode-454476-m02:/home/docker/cp-test.txt multinode-454476-m03:/home/docker/cp-test_multinode-454476-m02_multinode-454476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m03 "sudo cat /home/docker/cp-test_multinode-454476-m02_multinode-454476-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp testdata/cp-test.txt multinode-454476-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp multinode-454476-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1038997921/001/cp-test_multinode-454476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp multinode-454476-m03:/home/docker/cp-test.txt multinode-454476:/home/docker/cp-test_multinode-454476-m03_multinode-454476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476 "sudo cat /home/docker/cp-test_multinode-454476-m03_multinode-454476.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 cp multinode-454476-m03:/home/docker/cp-test.txt multinode-454476-m02:/home/docker/cp-test_multinode-454476-m03_multinode-454476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 ssh -n multinode-454476-m02 "sudo cat /home/docker/cp-test_multinode-454476-m03_multinode-454476-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.96s)

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-454476 node stop m03: (1.219864183s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-454476 status: exit status 7 (522.653659ms)

-- stdout --
	multinode-454476
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-454476-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-454476-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-454476 status --alsologtostderr: exit status 7 (517.921046ms)

-- stdout --
	multinode-454476
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-454476-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-454476-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0120 17:26:45.974886  131255 out.go:345] Setting OutFile to fd 1 ...
	I0120 17:26:45.975056  131255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:26:45.975066  131255 out.go:358] Setting ErrFile to fd 2...
	I0120 17:26:45.975071  131255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:26:45.975328  131255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 17:26:45.975548  131255 out.go:352] Setting JSON to false
	I0120 17:26:45.975588  131255 mustload.go:65] Loading cluster: multinode-454476
	I0120 17:26:45.975691  131255 notify.go:220] Checking for updates...
	I0120 17:26:45.976028  131255 config.go:182] Loaded profile config "multinode-454476": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:26:45.976052  131255 status.go:174] checking status of multinode-454476 ...
	I0120 17:26:45.976945  131255 cli_runner.go:164] Run: docker container inspect multinode-454476 --format={{.State.Status}}
	I0120 17:26:45.996229  131255 status.go:371] multinode-454476 host status = "Running" (err=<nil>)
	I0120 17:26:45.996256  131255 host.go:66] Checking if "multinode-454476" exists ...
	I0120 17:26:45.996626  131255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-454476
	I0120 17:26:46.028222  131255 host.go:66] Checking if "multinode-454476" exists ...
	I0120 17:26:46.028581  131255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 17:26:46.028633  131255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-454476
	I0120 17:26:46.049580  131255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/multinode-454476/id_rsa Username:docker}
	I0120 17:26:46.141141  131255 ssh_runner.go:195] Run: systemctl --version
	I0120 17:26:46.145461  131255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 17:26:46.159530  131255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 17:26:46.218416  131255 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:63 SystemTime:2025-01-20 17:26:46.207472882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 17:26:46.219042  131255 kubeconfig.go:125] found "multinode-454476" server: "https://192.168.67.2:8443"
	I0120 17:26:46.219084  131255 api_server.go:166] Checking apiserver status ...
	I0120 17:26:46.219135  131255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 17:26:46.231160  131255 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1487/cgroup
	I0120 17:26:46.241245  131255 api_server.go:182] apiserver freezer: "8:freezer:/docker/ae2ab7ee55798b0ee17d3866e7f83770637b94acaa6399ae6292ab2aa0c811e7/kubepods/burstable/pod28158ebf2a08c60ad743e4d111d20e67/07caab6c36142fcc10c2bc854b85bd858430ae8cf822fa7d1e45ba6579d44c15"
	I0120 17:26:46.241326  131255 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ae2ab7ee55798b0ee17d3866e7f83770637b94acaa6399ae6292ab2aa0c811e7/kubepods/burstable/pod28158ebf2a08c60ad743e4d111d20e67/07caab6c36142fcc10c2bc854b85bd858430ae8cf822fa7d1e45ba6579d44c15/freezer.state
	I0120 17:26:46.250522  131255 api_server.go:204] freezer state: "THAWED"
	I0120 17:26:46.250561  131255 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0120 17:26:46.258949  131255 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0120 17:26:46.258980  131255 status.go:463] multinode-454476 apiserver status = Running (err=<nil>)
	I0120 17:26:46.258990  131255 status.go:176] multinode-454476 status: &{Name:multinode-454476 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 17:26:46.259007  131255 status.go:174] checking status of multinode-454476-m02 ...
	I0120 17:26:46.259335  131255 cli_runner.go:164] Run: docker container inspect multinode-454476-m02 --format={{.State.Status}}
	I0120 17:26:46.277416  131255 status.go:371] multinode-454476-m02 host status = "Running" (err=<nil>)
	I0120 17:26:46.277441  131255 host.go:66] Checking if "multinode-454476-m02" exists ...
	I0120 17:26:46.277749  131255 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-454476-m02
	I0120 17:26:46.296439  131255 host.go:66] Checking if "multinode-454476-m02" exists ...
	I0120 17:26:46.296742  131255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 17:26:46.296793  131255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-454476-m02
	I0120 17:26:46.316515  131255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20109-2518/.minikube/machines/multinode-454476-m02/id_rsa Username:docker}
	I0120 17:26:46.404374  131255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 17:26:46.418891  131255 status.go:176] multinode-454476-m02 status: &{Name:multinode-454476-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 17:26:46.418929  131255 status.go:174] checking status of multinode-454476-m03 ...
	I0120 17:26:46.419276  131255 cli_runner.go:164] Run: docker container inspect multinode-454476-m03 --format={{.State.Status}}
	I0120 17:26:46.437824  131255 status.go:371] multinode-454476-m03 host status = "Stopped" (err=<nil>)
	I0120 17:26:46.437850  131255 status.go:384] host is not running, skipping remaining checks
	I0120 17:26:46.437857  131255 status.go:176] multinode-454476-m03 status: &{Name:multinode-454476-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-454476 node start m03 -v=7 --alsologtostderr: (9.534310649s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.31s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (84.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-454476
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-454476
E0120 17:27:01.700168    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-454476: (24.908817212s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-454476 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-454476 --wait=true -v=8 --alsologtostderr: (59.486647848s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-454476
--- PASS: TestMultiNode/serial/RestartKeepsNodes (84.54s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.30s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-454476 node delete m03: (4.590854294s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 stop
E0120 17:28:40.210333    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-454476 stop: (23.674673154s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-454476 status: exit status 7 (94.296731ms)

                                                
                                                
-- stdout --
	multinode-454476
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-454476-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-454476 status --alsologtostderr: exit status 7 (92.496529ms)

                                                
                                                
-- stdout --
	multinode-454476
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-454476-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 17:28:50.409881  139282 out.go:345] Setting OutFile to fd 1 ...
	I0120 17:28:50.410023  139282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:28:50.410035  139282 out.go:358] Setting ErrFile to fd 2...
	I0120 17:28:50.410053  139282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:28:50.410330  139282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 17:28:50.410508  139282 out.go:352] Setting JSON to false
	I0120 17:28:50.410548  139282 mustload.go:65] Loading cluster: multinode-454476
	I0120 17:28:50.410644  139282 notify.go:220] Checking for updates...
	I0120 17:28:50.411053  139282 config.go:182] Loaded profile config "multinode-454476": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:28:50.411081  139282 status.go:174] checking status of multinode-454476 ...
	I0120 17:28:50.411937  139282 cli_runner.go:164] Run: docker container inspect multinode-454476 --format={{.State.Status}}
	I0120 17:28:50.430255  139282 status.go:371] multinode-454476 host status = "Stopped" (err=<nil>)
	I0120 17:28:50.430280  139282 status.go:384] host is not running, skipping remaining checks
	I0120 17:28:50.430287  139282 status.go:176] multinode-454476 status: &{Name:multinode-454476 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 17:28:50.430320  139282 status.go:174] checking status of multinode-454476-m02 ...
	I0120 17:28:50.430642  139282 cli_runner.go:164] Run: docker container inspect multinode-454476-m02 --format={{.State.Status}}
	I0120 17:28:50.452970  139282 status.go:371] multinode-454476-m02 host status = "Stopped" (err=<nil>)
	I0120 17:28:50.452988  139282 status.go:384] host is not running, skipping remaining checks
	I0120 17:28:50.452994  139282 status.go:176] multinode-454476-m02 status: &{Name:multinode-454476-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.86s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-454476 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-454476 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (56.254843384s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-454476 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.92s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-454476
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-454476-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-454476-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.263002ms)

                                                
                                                
-- stdout --
	* [multinode-454476-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-454476-m02' is duplicated with machine name 'multinode-454476-m02' in profile 'multinode-454476'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-454476-m03 --driver=docker  --container-runtime=containerd
E0120 17:30:03.277096    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-454476-m03 --driver=docker  --container-runtime=containerd: (33.203547031s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-454476
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-454476: exit status 80 (352.248945ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-454476 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-454476-m03 already exists in multinode-454476-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-454476-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-454476-m03: (1.964672092s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.69s)

                                                
                                    
TestPreload (115.37s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-257636 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-257636 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m15.458294067s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-257636 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-257636 image pull gcr.io/k8s-minikube/busybox: (2.076300087s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-257636
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-257636: (12.010892983s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-257636 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0120 17:32:01.700047    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-257636 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.963926744s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-257636 image list
helpers_test.go:175: Cleaning up "test-preload-257636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-257636
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-257636: (2.570537804s)
--- PASS: TestPreload (115.37s)

                                                
                                    
TestScheduledStopUnix (106.46s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-223488 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-223488 --memory=2048 --driver=docker  --container-runtime=containerd: (30.510004846s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-223488 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-223488 -n scheduled-stop-223488
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-223488 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 17:32:53.439612    7844 retry.go:31] will retry after 90.559µs: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.439998    7844 retry.go:31] will retry after 171.17µs: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.441141    7844 retry.go:31] will retry after 313.833µs: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.442266    7844 retry.go:31] will retry after 314.813µs: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.443416    7844 retry.go:31] will retry after 468.629µs: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.444593    7844 retry.go:31] will retry after 523.55µs: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.445671    7844 retry.go:31] will retry after 1.551406ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.447812    7844 retry.go:31] will retry after 1.582031ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.449994    7844 retry.go:31] will retry after 1.683987ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.452121    7844 retry.go:31] will retry after 3.346947ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.456312    7844 retry.go:31] will retry after 5.545122ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.462534    7844 retry.go:31] will retry after 9.447858ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.472764    7844 retry.go:31] will retry after 8.900057ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.481956    7844 retry.go:31] will retry after 11.087977ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.494170    7844 retry.go:31] will retry after 20.471912ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
I0120 17:32:53.515414    7844 retry.go:31] will retry after 41.643011ms: open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/scheduled-stop-223488/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-223488 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-223488 -n scheduled-stop-223488
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-223488
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-223488 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0120 17:33:40.214475    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-223488
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-223488: exit status 7 (72.780904ms)

                                                
                                                
-- stdout --
	scheduled-stop-223488
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-223488 -n scheduled-stop-223488
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-223488 -n scheduled-stop-223488: exit status 7 (74.762084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-223488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-223488
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-223488: (4.386495099s)
--- PASS: TestScheduledStopUnix (106.46s)

                                                
                                    
TestInsufficientStorage (13.25s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-704953 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-704953 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.78178507s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2b2f22bf-f4a8-4db2-bdd3-ca5581a1828e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-704953] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0dfd58bb-5bb0-42c2-9631-afe404689528","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20109"}}
	{"specversion":"1.0","id":"e0e55f48-57da-4ad2-89be-8b0ae4160f5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b15240c1-b367-4627-a1dd-17403258c855","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig"}}
	{"specversion":"1.0","id":"c48fa669-4e8f-4e44-a3cd-d40cc99632f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube"}}
	{"specversion":"1.0","id":"21950351-4cdf-4a07-aea6-1b9cd758a78c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"50099896-871f-4a5a-8027-6afc5aca164b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"92b79c0e-ea07-4a16-9c00-7b901fc44bce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2cd7bebe-286c-4a6b-a2e0-76e958ba789e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c0a6f9cd-963d-4bf1-ab9d-a5e355a67045","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c81c1ef-56fb-40e3-9160-56774203d409","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"023e1873-85b1-482b-a2ee-c1e1ff904854","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-704953\" primary control-plane node in \"insufficient-storage-704953\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7ec628a-8f7b-4e4b-8ef0-79851c79a952","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8591d344-e1be-4f74-bdfb-98f38ddef056","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcd50b85-7067-4902-987b-2bbf683a4809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-704953 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-704953 --output=json --layout=cluster: exit status 7 (266.55964ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-704953","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-704953","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 17:34:19.916834  157939 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-704953" does not appear in /home/jenkins/minikube-integration/20109-2518/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-704953 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-704953 --output=json --layout=cluster: exit status 7 (300.515705ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-704953","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-704953","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 17:34:20.218106  158001 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-704953" does not appear in /home/jenkins/minikube-integration/20109-2518/kubeconfig
	E0120 17:34:20.228093  158001 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/insufficient-storage-704953/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-704953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-704953
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-704953: (1.897509479s)
--- PASS: TestInsufficientStorage (13.25s)

                                                
                                    
TestRunningBinaryUpgrade (88.38s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3177011102 start -p running-upgrade-480245 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3177011102 start -p running-upgrade-480245 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.20883299s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-480245 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0120 17:40:04.768883    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-480245 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.796373616s)
helpers_test.go:175: Cleaning up "running-upgrade-480245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-480245
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-480245: (2.521277127s)
--- PASS: TestRunningBinaryUpgrade (88.38s)

                                                
                                    
TestKubernetesUpgrade (345.19s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-538448 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-538448 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.333833879s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-538448
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-538448: (2.273761326s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-538448 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-538448 status --format={{.Host}}: exit status 7 (139.481277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-538448 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0120 17:37:01.705239    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-538448 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m34.614787469s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-538448 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-538448 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-538448 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (163.614456ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-538448] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-538448
	    minikube start -p kubernetes-upgrade-538448 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5384482 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-538448 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-538448 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-538448 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.922276864s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-538448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-538448
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-538448: (2.605749131s)
--- PASS: TestKubernetesUpgrade (345.19s)

                                                
                                    
TestMissingContainerUpgrade (164.23s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2442762878 start -p missing-upgrade-969840 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2442762878 start -p missing-upgrade-969840 --memory=2200 --driver=docker  --container-runtime=containerd: (1m28.422236882s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-969840
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-969840: (1.039155271s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-969840
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-969840 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-969840 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m10.317063037s)
helpers_test.go:175: Cleaning up "missing-upgrade-969840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-969840
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-969840: (2.862678229s)
--- PASS: TestMissingContainerUpgrade (164.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-025019 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-025019 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (92.465879ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-025019] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-025019 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-025019 --driver=docker  --container-runtime=containerd: (40.895343581s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-025019 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.62s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-025019 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-025019 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.228685905s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-025019 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-025019 status -o json: exit status 2 (302.204555ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-025019","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-025019
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-025019: (2.403695469s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.94s)

                                                
                                    
TestNoKubernetes/serial/Start (7.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-025019 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-025019 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.725766503s)
--- PASS: TestNoKubernetes/serial/Start (7.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-025019 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-025019 "sudo systemctl is-active --quiet service kubelet": exit status 1 (365.963173ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.24s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-025019
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-025019: (1.281679039s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-025019 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-025019 --driver=docker  --container-runtime=containerd: (8.245385407s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.25s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-025019 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-025019 "sudo systemctl is-active --quiet service kubelet": exit status 1 (427.816655ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (102.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2146209806 start -p stopped-upgrade-077009 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2146209806 start -p stopped-upgrade-077009 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.045619186s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2146209806 -p stopped-upgrade-077009 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2146209806 -p stopped-upgrade-077009 stop: (19.885126264s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-077009 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0120 17:38:40.210047    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-077009 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.16050283s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.09s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-077009
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-077009: (1.047601128s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                    
TestPause/serial/Start (49.24s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-521243 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-521243 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (49.242969356s)
--- PASS: TestPause/serial/Start (49.24s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.24s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-521243 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-521243 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.216753171s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.24s)

                                                
                                    
TestPause/serial/Pause (1.06s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-521243 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-521243 --alsologtostderr -v=5: (1.060353829s)
--- PASS: TestPause/serial/Pause (1.06s)

                                                
                                    
TestPause/serial/VerifyStatus (0.58s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-521243 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-521243 --output=json --layout=cluster: exit status 2 (580.025325ms)

                                                
                                                
-- stdout --
	{"Name":"pause-521243","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-521243","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.58s)

                                                
                                    
TestPause/serial/Unpause (1.13s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-521243 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-521243 --alsologtostderr -v=5: (1.129473217s)
--- PASS: TestPause/serial/Unpause (1.13s)

TestPause/serial/PauseAgain (1.21s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-521243 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-521243 --alsologtostderr -v=5: (1.211638674s)
--- PASS: TestPause/serial/PauseAgain (1.21s)

TestPause/serial/DeletePaused (3.15s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-521243 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-521243 --alsologtostderr -v=5: (3.147191765s)
--- PASS: TestPause/serial/DeletePaused (3.15s)

TestPause/serial/VerifyDeletedResources (0.53s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-521243
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-521243: exit status 1 (21.484905ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-521243: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)
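Note: the non-zero "docker volume inspect" above is the expected outcome here; after the profile is deleted, its container, volume, and network should all be gone. A minimal manual re-check of the same cleanup, assuming the profile name from this run, would be:

	# Each command should find nothing for the deleted profile.
	docker ps -a --filter name=pause-521243 --format '{{.Names}}'
	docker volume inspect pause-521243 >/dev/null 2>&1 || echo "volume removed"
	docker network ls --format '{{.Name}}' | grep -x pause-521243 || echo "network removed"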

TestNetworkPlugins/group/false (5.08s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-150493 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-150493 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (260.315016ms)
-- stdout --
	* [false-150493] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0120 17:41:34.091318  198415 out.go:345] Setting OutFile to fd 1 ...
	I0120 17:41:34.091553  198415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:41:34.091581  198415 out.go:358] Setting ErrFile to fd 2...
	I0120 17:41:34.091602  198415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 17:41:34.091891  198415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2518/.minikube/bin
	I0120 17:41:34.092339  198415 out.go:352] Setting JSON to false
	I0120 17:41:34.093334  198415 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5038,"bootTime":1737389856,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 17:41:34.093485  198415 start.go:139] virtualization:  
	I0120 17:41:34.100159  198415 out.go:177] * [false-150493] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 17:41:34.103572  198415 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 17:41:34.103738  198415 notify.go:220] Checking for updates...
	I0120 17:41:34.109891  198415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 17:41:34.112815  198415 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-2518/kubeconfig
	I0120 17:41:34.115863  198415 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2518/.minikube
	I0120 17:41:34.118777  198415 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 17:41:34.121719  198415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 17:41:34.125208  198415 config.go:182] Loaded profile config "force-systemd-flag-832515": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 17:41:34.125313  198415 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 17:41:34.189736  198415 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 17:41:34.189852  198415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 17:41:34.262349  198415 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:55 SystemTime:2025-01-20 17:41:34.247711606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 17:41:34.262469  198415 docker.go:318] overlay module found
	I0120 17:41:34.265988  198415 out.go:177] * Using the docker driver based on user configuration
	I0120 17:41:34.269409  198415 start.go:297] selected driver: docker
	I0120 17:41:34.269437  198415 start.go:901] validating driver "docker" against <nil>
	I0120 17:41:34.269452  198415 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 17:41:34.273516  198415 out.go:201] 
	W0120 17:41:34.277147  198415 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0120 17:41:34.280403  198415 out.go:201] 
** /stderr **
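Note: the MK_USAGE exit above is exactly what this test expects: with --container-runtime=containerd, minikube rejects --cni=false because containerd needs a CNI plugin for pod networking. For comparison only (not something the test runs), a start command that names a concrete CNI would pass this validation, e.g.:

	# Hypothetical counter-example; any supported CNI value (bridge, kindnet,
	# calico, ...) satisfies the containerd requirement that --cni=false violates.
	out/minikube-linux-arm64 start -p false-150493 --memory=2048 --cni=bridge \
	  --driver=docker --container-runtime=containerd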
net_test.go:88: 
----------------------- debugLogs start: false-150493 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-150493

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-150493

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-150493

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-150493

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-150493

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-150493

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-150493

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-150493

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-150493

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-150493

>>> host: /etc/nsswitch.conf:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /etc/hosts:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /etc/resolv.conf:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-150493

>>> host: crictl pods:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: crictl containers:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> k8s: describe netcat deployment:
error: context "false-150493" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-150493" does not exist

>>> k8s: netcat logs:
error: context "false-150493" does not exist

>>> k8s: describe coredns deployment:
error: context "false-150493" does not exist

>>> k8s: describe coredns pods:
error: context "false-150493" does not exist

>>> k8s: coredns logs:
error: context "false-150493" does not exist

>>> k8s: describe api server pod(s):
error: context "false-150493" does not exist

>>> k8s: api server logs:
error: context "false-150493" does not exist

>>> host: /etc/cni:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: ip a s:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: ip r s:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: iptables-save:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: iptables table nat:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> k8s: describe kube-proxy daemon set:
error: context "false-150493" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-150493" does not exist

>>> k8s: kube-proxy logs:
error: context "false-150493" does not exist

>>> host: kubelet daemon status:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: kubelet daemon config:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> k8s: kubelet logs:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-150493

>>> host: docker daemon status:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: docker daemon config:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /etc/docker/daemon.json:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: docker system info:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: cri-docker daemon status:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: cri-docker daemon config:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: cri-dockerd version:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: containerd daemon status:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: containerd daemon config:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /etc/containerd/config.toml:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: containerd config dump:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: crio daemon status:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: crio daemon config:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: /etc/crio:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

>>> host: crio config:
* Profile "false-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-150493"

----------------------- debugLogs end: false-150493 [took: 4.488885214s] --------------------------------
helpers_test.go:175: Cleaning up "false-150493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-150493
--- PASS: TestNetworkPlugins/group/false (5.08s)

TestStartStop/group/old-k8s-version/serial/FirstStart (155.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-145659 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0120 17:43:40.215944    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-145659 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m35.190384055s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (155.19s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-145659 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d5c9fa9c-ee24-4a39-8187-1d48bb751974] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d5c9fa9c-ee24-4a39-8187-1d48bb751974] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003726185s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-145659 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.69s)
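Note: the "waiting 8m0s for pods matching integration-test=busybox" step is performed by the test helpers polling the pod list; a rough standalone equivalent of that readiness poll (illustrative only, using the context name from this run) is:

	# Wait for the busybox pod created from testdata/busybox.yaml to become Ready.
	kubectl --context old-k8s-version-145659 -n default wait pod \
	  -l integration-test=busybox --for=condition=Ready --timeout=8m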

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-145659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-145659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.61386202s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-145659 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.79s)
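Note: the --images/--registries flags above point the metrics-server addon at registry.k8s.io/echoserver:1.4 served from fake.domain, a registry that is intentionally unreachable in this suite, so the addon's pod is not expected to pull successfully. One way to confirm the override landed (a sketch using the context from this run) is to read the image back from the deployment:

	# Should print fake.domain/registry.k8s.io/echoserver:1.4 after the addon is enabled.
	kubectl --context old-k8s-version-145659 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'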

TestStartStop/group/old-k8s-version/serial/Stop (12.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-145659 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-145659 --alsologtostderr -v=3: (12.335391564s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.34s)

TestStartStop/group/embed-certs/serial/FirstStart (83.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-698725 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-698725 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m23.970321498s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145659 -n old-k8s-version-145659
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-145659 -n old-k8s-version-145659: exit status 7 (84.647875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-145659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-698725 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2e40a75-9c1b-4b30-8d8f-159f45a91d67] Pending
helpers_test.go:344: "busybox" [a2e40a75-9c1b-4b30-8d8f-159f45a91d67] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2e40a75-9c1b-4b30-8d8f-159f45a91d67] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003612182s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-698725 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-698725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-698725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.146464304s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-698725 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/embed-certs/serial/Stop (12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-698725 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-698725 --alsologtostderr -v=3: (11.995002304s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-698725 -n embed-certs-698725
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-698725 -n embed-certs-698725: exit status 7 (100.758378ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-698725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (268.51s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-698725 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 17:48:40.210072    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:52:01.700103    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-698725 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (4m28.161961594s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-698725 -n embed-certs-698725
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.51s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jkbqb" [686109d6-f51b-4b71-bb2f-bc53c78ce079] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003631124s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jkbqb" [686109d6-f51b-4b71-bb2f-bc53c78ce079] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004162784s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-698725 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-698725 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-698725 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-698725 -n embed-certs-698725
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-698725 -n embed-certs-698725: exit status 2 (336.761913ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-698725 -n embed-certs-698725
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-698725 -n embed-certs-698725: exit status 2 (323.775585ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-698725 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-698725 -n embed-certs-698725
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-698725 -n embed-certs-698725
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.13s)

TestStartStop/group/no-preload/serial/FirstStart (76.04s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-764965 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-764965 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m16.044550373s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.04s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-httgs" [0275430f-ee74-404a-9355-e27e4d01d38b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005870775s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-httgs" [0275430f-ee74-404a-9355-e27e4d01d38b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005590147s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-145659 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-145659 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-145659 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145659 -n old-k8s-version-145659
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145659 -n old-k8s-version-145659: exit status 2 (401.114579ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-145659 -n old-k8s-version-145659
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-145659 -n old-k8s-version-145659: exit status 2 (410.171822ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-145659 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-145659 --alsologtostderr -v=1: (1.043187808s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-145659 -n old-k8s-version-145659
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-145659 -n old-k8s-version-145659
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.73s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-049898 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 17:53:40.209651    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-049898 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (56.361196069s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.36s)

TestStartStop/group/no-preload/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-764965 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [107ed8bc-5367-44e1-bf36-ffc02e8a9101] Pending
helpers_test.go:344: "busybox" [107ed8bc-5367-44e1-bf36-ffc02e8a9101] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [107ed8bc-5367-44e1-bf36-ffc02e8a9101] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004909733s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-764965 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.36s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-049898 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [788a5236-dbcf-4e8a-84e1-37db2d2d0024] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [788a5236-dbcf-4e8a-84e1-37db2d2d0024] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004276644s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-049898 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-764965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-764965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.035517738s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-764965 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (12.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-764965 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-764965 --alsologtostderr -v=3: (12.168542261s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-049898 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-049898 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.097924048s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-049898 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-049898 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-049898 --alsologtostderr -v=3: (12.025352563s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-764965 -n no-preload-764965
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-764965 -n no-preload-764965: exit status 7 (74.837385ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-764965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (300.95s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-764965 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-764965 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (5m0.596441222s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-764965 -n no-preload-764965
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-049898 -n default-k8s-diff-port-049898
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-049898 -n default-k8s-diff-port-049898: exit status 7 (73.914814ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-049898 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-049898 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 17:55:41.165649    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:41.172134    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:41.183474    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:41.204875    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:41.246360    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:41.327808    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:41.489297    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:41.811148    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:42.453121    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:43.734803    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:46.296296    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:55:51.418181    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:56:01.660180    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:56:22.141536    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:56:44.770985    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:57:01.699979    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:57:03.102835    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:58:25.024179    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
E0120 17:58:40.210191    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-049898 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (5m2.774329971s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-049898 -n default-k8s-diff-port-049898
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.12s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wrh9c" [20e05935-81b7-4138-8038-d385cec28717] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003271282s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-29rkf" [29ff5592-9541-48b6-a283-e4a3938bbb91] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004397772s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wrh9c" [20e05935-81b7-4138-8038-d385cec28717] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003449846s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-764965 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-29rkf" [29ff5592-9541-48b6-a283-e4a3938bbb91] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00443413s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-049898 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-764965 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.04s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-764965 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-764965 -n no-preload-764965
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-764965 -n no-preload-764965: exit status 2 (331.375126ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-764965 -n no-preload-764965
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-764965 -n no-preload-764965: exit status 2 (324.534755ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-764965 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-764965 -n no-preload-764965
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-764965 -n no-preload-764965
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.04s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-049898 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-049898 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-049898 --alsologtostderr -v=1: (1.105008486s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-049898 -n default-k8s-diff-port-049898
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-049898 -n default-k8s-diff-port-049898: exit status 2 (414.11495ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-049898 -n default-k8s-diff-port-049898
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-049898 -n default-k8s-diff-port-049898: exit status 2 (408.718704ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-049898 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-049898 -n default-k8s-diff-port-049898
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-049898 -n default-k8s-diff-port-049898
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.12s)

TestStartStop/group/newest-cni/serial/FirstStart (44.32s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-529324 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-529324 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (44.320427946s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.32s)

TestNetworkPlugins/group/auto/Start (60.89s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m0.894272207s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.89s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.56s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-529324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-529324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.555446866s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.56s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-529324 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-529324 --alsologtostderr -v=3: (1.289307342s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-529324 -n newest-cni-529324
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-529324 -n newest-cni-529324: exit status 7 (124.968452ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-529324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (16.89s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-529324 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-529324 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (16.343809885s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-529324 -n newest-cni-529324
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.89s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-529324 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-529324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-529324 -n newest-cni-529324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-529324 -n newest-cni-529324: exit status 2 (335.480843ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-529324 -n newest-cni-529324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-529324 -n newest-cni-529324: exit status 2 (332.525707ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-529324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-529324 -n newest-cni-529324
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-529324 -n newest-cni-529324
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.53s)

TestNetworkPlugins/group/auto/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-150493 "pgrep -a kubelet"
I0120 18:00:35.022098    7844 config.go:182] Loaded profile config "auto-150493": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.51s)

TestNetworkPlugins/group/auto/NetCatPod (13.46s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-150493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qm5vl" [0134b4ff-474b-4230-91b9-ce4a6cbd65fd] Pending
helpers_test.go:344: "netcat-5d86dc444-qm5vl" [0134b4ff-474b-4230-91b9-ce4a6cbd65fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004554692s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.46s)

TestNetworkPlugins/group/kindnet/Start (87.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0120 18:00:41.164985    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/old-k8s-version-145659/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m27.890152984s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.89s)

TestNetworkPlugins/group/auto/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-150493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.34s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

TestNetworkPlugins/group/calico/Start (62.22s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0120 18:02:01.700312    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/addons-168570/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m2.223656665s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xhw6n" [66871eda-9389-4c53-9165-fe393a533fbe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003744342s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-150493 "pgrep -a kubelet"
I0120 18:02:12.349796    7844 config.go:182] Loaded profile config "kindnet-150493": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-150493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-w52rn" [75b645d8-de47-451c-8c1b-87af26db8e10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-w52rn" [75b645d8-de47-451c-8c1b-87af26db8e10] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005276231s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ppw6j" [dbc74efd-1d24-44e6-9439-3348641d4518] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004885902s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-150493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-150493 "pgrep -a kubelet"
I0120 18:02:23.126709    7844 config.go:182] Loaded profile config "calico-150493": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-150493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wcprg" [2c00b490-cbd1-4ab2-a7b2-f7c15476f834] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wcprg" [2c00b490-cbd1-4ab2-a7b2-f7c15476f834] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004459662s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.31s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-150493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (54.96s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.955309934s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.96s)

TestNetworkPlugins/group/enable-default-cni/Start (51.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0120 18:03:23.281483    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (51.223226778s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.22s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-150493 "pgrep -a kubelet"
I0120 18:03:39.954075    7844 config.go:182] Loaded profile config "custom-flannel-150493": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-150493 replace --force -f testdata/netcat-deployment.yaml
E0120 18:03:40.209479    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/functional-659288/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hcmqg" [38e8d3cb-0619-45ff-80bb-985b8d76a9f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hcmqg" [38e8d3cb-0619-45ff-80bb-985b8d76a9f4] Running
E0120 18:03:46.144000    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:46.150333    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:46.161690    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:46.183035    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:46.224421    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:46.306142    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:46.468105    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:46.789824    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:47.431524    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:48.713155    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:49.560535    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:49.566851    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:49.578208    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:49.599606    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:49.640977    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:49.722508    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:49.884075    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:50.205958    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:50.847509    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:51.275320    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004358358s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-150493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-150493 "pgrep -a kubelet"
I0120 18:03:52.629013    7844 config.go:182] Loaded profile config "enable-default-cni-150493": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-150493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cl84x" [e9a12e3d-daba-42ec-9e21-7c74ead44627] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0120 18:03:54.691069    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:03:56.397169    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-cl84x" [e9a12e3d-daba-42ec-9e21-7c74ead44627] Running
E0120 18:03:59.813078    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.011229666s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-150493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
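
Localhost and HairPin reuse the same netcat probe with different targets: Localhost checks that the pod can reach port 8080 on its own loopback, while HairPin checks that it can reach itself back through its service name (hairpin NAT). The same trio of probes repeats for the flannel and bridge groups below. For example, the hairpin check is:

    # Connect from the netcat pod back to its own service
    $ kubectl --context enable-default-cni-150493 exec deployment/netcat -- \
        /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"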

                                                
                                    
TestNetworkPlugins/group/flannel/Start (59.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (59.312197427s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.31s)
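
Each Start subtest brings up a fresh minikube profile with the CNI under test; aside from the profile name, the flannel and bridge runs differ only in the --cni value. Trimmed to the flags that matter here, the invocation looks like:

    # Start a containerd-based cluster with the flannel CNI
    $ out/minikube-linux-arm64 start -p flannel-150493 --memory=3072 --wait=true \
        --wait-timeout=15m --cni=flannel --driver=docker --container-runtime=containerd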

                                                
                                    
TestNetworkPlugins/group/bridge/Start (82.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0120 18:04:27.121048    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:04:30.536951    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:05:08.083314    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/no-preload-764965/client.crt: no such file or directory" logger="UnhandledError"
E0120 18:05:11.498494    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/default-k8s-diff-port-049898/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-150493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m22.069261342s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.07s)
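
The E0120 cert_rotation messages interleaved with this and the surrounding tests appear to come from client-go's certificate reloader still watching client.crt files of profiles (no-preload-764965, default-k8s-diff-port-049898, auto-150493) that earlier tests have already deleted; they are background noise and do not affect any verdict here. If in doubt, the profile directories can be checked directly:

    # Deleted profiles should no longer have a directory (and hence no client.crt)
    $ ls /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/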

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9qhsg" [09bdd504-04c5-40d3-abaa-1810c2176edf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004552811s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
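
ControllerPod confirms the flannel daemonset itself is healthy before any connectivity is exercised: it waits for a pod labelled app=flannel in the kube-flannel namespace. A hand-run equivalent, again using kubectl wait in place of the test's polling helper:

    # Wait for the flannel daemonset pod to become Ready
    $ kubectl --context flannel-150493 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m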

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-150493 "pgrep -a kubelet"
I0120 18:05:20.363310    7844 config.go:182] Loaded profile config "flannel-150493": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-150493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-h6qkw" [f80b9ad3-856c-4277-bbb7-2b0ab6ca3cd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-h6qkw" [f80b9ad3-856c-4277-bbb7-2b0ab6ca3cd3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003738458s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-150493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-150493 "pgrep -a kubelet"
I0120 18:05:49.144961    7844 config.go:182] Loaded profile config "bridge-150493": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-150493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-flzg4" [cd01e034-f295-4f1f-8c48-3025a225a40c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-flzg4" [cd01e034-f295-4f1f-8c48-3025a225a40c] Running
E0120 18:05:55.942349    7844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-2518/.minikube/profiles/auto-150493/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004197559s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.43s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-150493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-150493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                    

Test skip (29/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.57s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-107762 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-107762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-107762
--- SKIP: TestDownloadOnlyKic (0.57s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-645795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-645795
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-150493 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-150493" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-150493

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-150493"

                                                
                                                
----------------------- debugLogs end: kubenet-150493 [took: 5.536681383s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-150493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-150493
--- SKIP: TestNetworkPlugins/group/kubenet (5.78s)
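
Every debugLogs probe above fails with "context was not found" or "Profile ... not found" because the kubenet variant is skipped before a cluster is ever created, so there is nothing to collect; the errors are expected noise from the diagnostic harness rather than a problem with the run. With access to the same workspace this is easy to confirm:

    # Neither a kubenet-150493 minikube profile nor a kubeconfig context should exist
    $ out/minikube-linux-arm64 profile list
    $ kubectl config get-contexts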

                                                
                                    
TestNetworkPlugins/group/cilium (5.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-150493 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-150493

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-150493" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-150493" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: iptables table nat:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-150493

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-150493

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-150493" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-150493" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-150493

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-150493

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-150493" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-150493" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-150493" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-150493" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-150493" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: kubelet daemon config:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> k8s: kubelet logs:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-150493

>>> host: docker daemon status:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: docker daemon config:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: docker system info:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: cri-docker daemon status:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: cri-docker daemon config:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: cri-dockerd version:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: containerd daemon status:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: containerd daemon config:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: containerd config dump:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: crio daemon status:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: crio daemon config:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: /etc/crio:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

>>> host: crio config:
* Profile "cilium-150493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-150493"

----------------------- debugLogs end: cilium-150493 [took: 5.043988663s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-150493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-150493
--- SKIP: TestNetworkPlugins/group/cilium (5.27s)