Test Report: Docker_Linux_containerd_arm64 20242

454e3a8af9229d80194750b761a4b9142724e045:2025-01-20:37993

Failed tests (1/330)

Order  Failed test                                              Duration (s)
302    TestStartStop/group/old-k8s-version/serial/SecondStart   372.97
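
To re-run just this test outside CI, here is a minimal sketch (assumptions: a minikube source checkout at the commit above, out/minikube-linux-arm64 already built, and Docker available; the -timeout value is illustrative and the integration harness may expect additional flags):

	go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' -timeout 90m

The exact minikube start command the test invokes is reproduced verbatim in the log below.
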
TestStartStop/group/old-k8s-version/serial/SecondStart (372.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m9.27692981s)

-- stdout --
	* [old-k8s-version-140749] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-140749" primary control-plane node in "old-k8s-version-140749" cluster
	* Pulling base image v0.0.46 ...
	* Restarting existing docker container for "old-k8s-version-140749" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-140749 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0120 14:26:56.833498  950903 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:26:56.833721  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:26:56.833734  950903 out.go:358] Setting ErrFile to fd 2...
	I0120 14:26:56.833740  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:26:56.833986  950903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 14:26:56.834361  950903 out.go:352] Setting JSON to false
	I0120 14:26:56.835390  950903 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14962,"bootTime":1737368255,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 14:26:56.835465  950903 start.go:139] virtualization:  
	I0120 14:26:56.840767  950903 out.go:177] * [old-k8s-version-140749] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 14:26:56.844020  950903 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:26:56.844069  950903 notify.go:220] Checking for updates...
	I0120 14:26:56.850532  950903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:26:56.853411  950903 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 14:26:56.856208  950903 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	I0120 14:26:56.859050  950903 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 14:26:56.861896  950903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:26:56.865346  950903 config.go:182] Loaded profile config "old-k8s-version-140749": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 14:26:56.868998  950903 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 14:26:56.871948  950903 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:26:56.916245  950903 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 14:26:56.916380  950903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 14:26:57.002870  950903 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 14:26:56.990165693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 14:26:57.002985  950903 docker.go:318] overlay module found
	I0120 14:26:57.006925  950903 out.go:177] * Using the docker driver based on existing profile
	I0120 14:26:57.009867  950903 start.go:297] selected driver: docker
	I0120 14:26:57.009898  950903 start.go:901] validating driver "docker" against &{Name:old-k8s-version-140749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:26:57.010024  950903 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:26:57.010767  950903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 14:26:57.089550  950903 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 14:26:57.078888764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 14:26:57.090055  950903 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:26:57.090088  950903 cni.go:84] Creating CNI manager for ""
	I0120 14:26:57.090291  950903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 14:26:57.090365  950903 start.go:340] cluster config:
	{Name:old-k8s-version-140749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:26:57.095738  950903 out.go:177] * Starting "old-k8s-version-140749" primary control-plane node in "old-k8s-version-140749" cluster
	I0120 14:26:57.098754  950903 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 14:26:57.101886  950903 out.go:177] * Pulling base image v0.0.46 ...
	I0120 14:26:57.104832  950903 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 14:26:57.104905  950903 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0120 14:26:57.104917  950903 cache.go:56] Caching tarball of preloaded images
	I0120 14:26:57.105028  950903 preload.go:172] Found /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0120 14:26:57.105044  950903 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0120 14:26:57.105170  950903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/config.json ...
	I0120 14:26:57.105443  950903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 14:26:57.139935  950903 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0120 14:26:57.139959  950903 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0120 14:26:57.139973  950903 cache.go:227] Successfully downloaded all kic artifacts
	I0120 14:26:57.140005  950903 start.go:360] acquireMachinesLock for old-k8s-version-140749: {Name:mk3b1de2e93537f0dae30829ba65f2718277905f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:26:57.140069  950903 start.go:364] duration metric: took 37.9µs to acquireMachinesLock for "old-k8s-version-140749"
	I0120 14:26:57.140093  950903 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:26:57.140099  950903 fix.go:54] fixHost starting: 
	I0120 14:26:57.140372  950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
	I0120 14:26:57.161116  950903 fix.go:112] recreateIfNeeded on old-k8s-version-140749: state=Stopped err=<nil>
	W0120 14:26:57.161150  950903 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:26:57.164163  950903 out.go:177] * Restarting existing docker container for "old-k8s-version-140749" ...
	I0120 14:26:57.167030  950903 cli_runner.go:164] Run: docker start old-k8s-version-140749
	I0120 14:26:57.527115  950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
	I0120 14:26:57.563993  950903 kic.go:430] container "old-k8s-version-140749" state is running.
	I0120 14:26:57.564902  950903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-140749
	I0120 14:26:57.601328  950903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/config.json ...
	I0120 14:26:57.601557  950903 machine.go:93] provisionDockerMachine start ...
	I0120 14:26:57.601678  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:26:57.631658  950903 main.go:141] libmachine: Using SSH client type: native
	I0120 14:26:57.631924  950903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I0120 14:26:57.631934  950903 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:26:57.632542  950903 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50272->127.0.0.1:33829: read: connection reset by peer
	I0120 14:27:00.773080  950903 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-140749
	
	I0120 14:27:00.773104  950903 ubuntu.go:169] provisioning hostname "old-k8s-version-140749"
	I0120 14:27:00.773179  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:00.803054  950903 main.go:141] libmachine: Using SSH client type: native
	I0120 14:27:00.803310  950903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I0120 14:27:00.803324  950903 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-140749 && echo "old-k8s-version-140749" | sudo tee /etc/hostname
	I0120 14:27:00.962725  950903 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-140749
	
	I0120 14:27:00.962810  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:00.992437  950903 main.go:141] libmachine: Using SSH client type: native
	I0120 14:27:00.992698  950903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I0120 14:27:00.992723  950903 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-140749' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-140749/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-140749' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:27:01.135785  950903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:27:01.135821  950903 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20242-741865/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-741865/.minikube}
	I0120 14:27:01.135872  950903 ubuntu.go:177] setting up certificates
	I0120 14:27:01.135902  950903 provision.go:84] configureAuth start
	I0120 14:27:01.136000  950903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-140749
	I0120 14:27:01.166093  950903 provision.go:143] copyHostCerts
	I0120 14:27:01.166158  950903 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem, removing ...
	I0120 14:27:01.166167  950903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem
	I0120 14:27:01.166249  950903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem (1078 bytes)
	I0120 14:27:01.166370  950903 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem, removing ...
	I0120 14:27:01.166376  950903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem
	I0120 14:27:01.166403  950903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem (1123 bytes)
	I0120 14:27:01.166468  950903 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem, removing ...
	I0120 14:27:01.166473  950903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem
	I0120 14:27:01.166497  950903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem (1679 bytes)
	I0120 14:27:01.166560  950903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-140749 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-140749]
	I0120 14:27:01.498196  950903 provision.go:177] copyRemoteCerts
	I0120 14:27:01.498281  950903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:27:01.498334  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:01.516805  950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
	I0120 14:27:01.607677  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 14:27:01.635820  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 14:27:01.661396  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 14:27:01.686435  950903 provision.go:87] duration metric: took 550.512166ms to configureAuth
	I0120 14:27:01.686511  950903 ubuntu.go:193] setting minikube options for container-runtime
	I0120 14:27:01.686749  950903 config.go:182] Loaded profile config "old-k8s-version-140749": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 14:27:01.686767  950903 machine.go:96] duration metric: took 4.08520262s to provisionDockerMachine
	I0120 14:27:01.686778  950903 start.go:293] postStartSetup for "old-k8s-version-140749" (driver="docker")
	I0120 14:27:01.686803  950903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:27:01.686865  950903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:27:01.686923  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:01.705240  950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
	I0120 14:27:01.795317  950903 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:27:01.799153  950903 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0120 14:27:01.799212  950903 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0120 14:27:01.799225  950903 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0120 14:27:01.799234  950903 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0120 14:27:01.799245  950903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-741865/.minikube/addons for local assets ...
	I0120 14:27:01.799322  950903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-741865/.minikube/files for local assets ...
	I0120 14:27:01.799418  950903 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem -> 7472562.pem in /etc/ssl/certs
	I0120 14:27:01.799544  950903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:27:01.808961  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem --> /etc/ssl/certs/7472562.pem (1708 bytes)
	I0120 14:27:01.834385  950903 start.go:296] duration metric: took 147.590756ms for postStartSetup
	I0120 14:27:01.834470  950903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 14:27:01.834519  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:01.852203  950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
	I0120 14:27:01.940455  950903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0120 14:27:01.947983  950903 fix.go:56] duration metric: took 4.807876879s for fixHost
	I0120 14:27:01.948062  950903 start.go:83] releasing machines lock for "old-k8s-version-140749", held for 4.807978458s
	I0120 14:27:01.948172  950903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-140749
	I0120 14:27:01.981284  950903 ssh_runner.go:195] Run: cat /version.json
	I0120 14:27:01.981342  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:01.981646  950903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:27:01.981720  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:02.015999  950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
	I0120 14:27:02.019799  950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
	I0120 14:27:02.109329  950903 ssh_runner.go:195] Run: systemctl --version
	I0120 14:27:02.262571  950903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0120 14:27:02.268658  950903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0120 14:27:02.309921  950903 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0120 14:27:02.310000  950903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:27:02.322355  950903 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 14:27:02.322379  950903 start.go:495] detecting cgroup driver to use...
	I0120 14:27:02.322412  950903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0120 14:27:02.322468  950903 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 14:27:02.356989  950903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 14:27:02.372020  950903 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:27:02.372084  950903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:27:02.390728  950903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:27:02.404593  950903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:27:02.509473  950903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:27:02.631861  950903 docker.go:233] disabling docker service ...
	I0120 14:27:02.631932  950903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:27:02.648831  950903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:27:02.664093  950903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:27:02.797207  950903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:27:02.917442  950903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:27:02.938070  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:27:02.956438  950903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0120 14:27:02.967629  950903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 14:27:02.978634  950903 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 14:27:02.978703  950903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 14:27:02.990131  950903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 14:27:03.003766  950903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 14:27:03.020025  950903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 14:27:03.032399  950903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:27:03.043485  950903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 14:27:03.060540  950903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:27:03.072981  950903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:27:03.083153  950903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:27:03.201649  950903 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 14:27:03.464533  950903 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 14:27:03.464613  950903 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 14:27:03.472726  950903 start.go:563] Will wait 60s for crictl version
	I0120 14:27:03.472798  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:27:03.476778  950903 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:27:03.549634  950903 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0120 14:27:03.549724  950903 ssh_runner.go:195] Run: containerd --version
	I0120 14:27:03.579985  950903 ssh_runner.go:195] Run: containerd --version
	I0120 14:27:03.614734  950903 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	I0120 14:27:03.618334  950903 cli_runner.go:164] Run: docker network inspect old-k8s-version-140749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0120 14:27:03.637347  950903 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0120 14:27:03.641242  950903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:27:03.660806  950903 kubeadm.go:883] updating cluster {Name:old-k8s-version-140749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:27:03.660939  950903 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 14:27:03.661016  950903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:27:03.740304  950903 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 14:27:03.740326  950903 containerd.go:534] Images already preloaded, skipping extraction
	I0120 14:27:03.740386  950903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:27:03.814311  950903 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 14:27:03.814335  950903 cache_images.go:84] Images are preloaded, skipping loading
	I0120 14:27:03.814350  950903 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0120 14:27:03.814512  950903 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-140749 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:27:03.814586  950903 ssh_runner.go:195] Run: sudo crictl info
	I0120 14:27:03.892671  950903 cni.go:84] Creating CNI manager for ""
	I0120 14:27:03.892695  950903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 14:27:03.892704  950903 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:27:03.892726  950903 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-140749 NodeName:old-k8s-version-140749 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 14:27:03.892847  950903 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-140749"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:27:03.892912  950903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 14:27:03.911458  950903 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:27:03.911526  950903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:27:03.928903  950903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0120 14:27:03.965397  950903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:27:04.023109  950903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0120 14:27:04.050986  950903 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0120 14:27:04.054834  950903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:27:04.083916  950903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:27:04.239100  950903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:27:04.275100  950903 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749 for IP: 192.168.85.2
	I0120 14:27:04.275119  950903 certs.go:194] generating shared ca certs ...
	I0120 14:27:04.275142  950903 certs.go:226] acquiring lock for ca certs: {Name:mka7a6ccd7d8b5f47789c70c8e6dc479acdcdb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:27:04.275335  950903 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.key
	I0120 14:27:04.275596  950903 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.key
	I0120 14:27:04.275610  950903 certs.go:256] generating profile certs ...
	I0120 14:27:04.275718  950903 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.key
	I0120 14:27:04.275792  950903 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/apiserver.key.f3a616b9
	I0120 14:27:04.276033  950903 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/proxy-client.key
	I0120 14:27:04.276331  950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256.pem (1338 bytes)
	W0120 14:27:04.276431  950903 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256_empty.pem, impossibly tiny 0 bytes
	I0120 14:27:04.276460  950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 14:27:04.276538  950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem (1078 bytes)
	I0120 14:27:04.276583  950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:27:04.276610  950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem (1679 bytes)
	I0120 14:27:04.276665  950903 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem (1708 bytes)
	I0120 14:27:04.277436  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:27:04.340286  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 14:27:04.379786  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:27:04.414376  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 14:27:04.450238  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 14:27:04.489721  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 14:27:04.517446  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:27:04.542907  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:27:04.568353  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:27:04.635339  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256.pem --> /usr/share/ca-certificates/747256.pem (1338 bytes)
	I0120 14:27:04.706670  950903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem --> /usr/share/ca-certificates/7472562.pem (1708 bytes)
	I0120 14:27:04.755765  950903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:27:04.797728  950903 ssh_runner.go:195] Run: openssl version
	I0120 14:27:04.806204  950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:27:04.819280  950903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:27:04.823993  950903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:27:04.824269  950903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:27:04.840819  950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:27:04.860113  950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/747256.pem && ln -fs /usr/share/ca-certificates/747256.pem /etc/ssl/certs/747256.pem"
	I0120 14:27:04.874755  950903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/747256.pem
	I0120 14:27:04.878988  950903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 13:48 /usr/share/ca-certificates/747256.pem
	I0120 14:27:04.879061  950903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/747256.pem
	I0120 14:27:04.898042  950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/747256.pem /etc/ssl/certs/51391683.0"
	I0120 14:27:04.939761  950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7472562.pem && ln -fs /usr/share/ca-certificates/7472562.pem /etc/ssl/certs/7472562.pem"
	I0120 14:27:04.974905  950903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7472562.pem
	I0120 14:27:04.989083  950903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 13:48 /usr/share/ca-certificates/7472562.pem
	I0120 14:27:04.989166  950903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7472562.pem
	I0120 14:27:04.996595  950903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7472562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:27:05.013307  950903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:27:05.023541  950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:27:05.037781  950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:27:05.053029  950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:27:05.072639  950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:27:05.080322  950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:27:05.090348  950903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 14:27:05.105081  950903 kubeadm.go:392] StartCluster: {Name:old-k8s-version-140749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-140749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:27:05.105179  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 14:27:05.105275  950903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:27:05.177517  950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
	I0120 14:27:05.177549  950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
	I0120 14:27:05.177556  950903 cri.go:89] found id: "f927d850c11b6c45d5cf960f5cc2e994752352515a4ba8707751f12c497ceaad"
	I0120 14:27:05.177559  950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
	I0120 14:27:05.177562  950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
	I0120 14:27:05.177566  950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
	I0120 14:27:05.177569  950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
	I0120 14:27:05.177572  950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
	I0120 14:27:05.177576  950903 cri.go:89] found id: ""
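cri.go collects the kube-system container IDs by shelling out to crictl with a pod-namespace label filter; the final empty "found id" entry is just the blank line terminating crictl's output. A sketch of the same lookup from Go, using exactly the command shown above (helper name illustrative; assumes crictl on PATH):

	package cri

	import (
		"os/exec"
		"strings"
	)

	// kubeSystemContainers returns the IDs of all containers whose pod lives in
	// the kube-system namespace, matching the crictl invocation in the log.
	func kubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		// Fields drops the trailing blank line that shows up as found id: "".
		return strings.Fields(string(out)), nil
	}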
	I0120 14:27:05.177659  950903 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0120 14:27:05.197340  950903 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T14:27:05Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0120 14:27:05.197421  950903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:27:05.207511  950903 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:27:05.207532  950903 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:27:05.207586  950903 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:27:05.223964  950903 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:27:05.224406  950903 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-140749" does not appear in /home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 14:27:05.224517  950903 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-741865/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-140749" cluster setting kubeconfig missing "old-k8s-version-140749" context setting]
	I0120 14:27:05.224810  950903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-741865/kubeconfig: {Name:mkcf7578b1c91d60616ac7150d8566b28a92e8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
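The kubeconfig.go lines show the endpoint check failing because the profile's cluster and context entries are missing from the kubeconfig, so minikube repairs the file under a write lock. A rough sketch of that repair using client-go's clientcmd package (the function name and parameters are illustrative, not minikube's API):

	package kubeconfig

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	// addProfile inserts the cluster and context entries that the verify step
	// found missing, then writes the kubeconfig back to disk.
	func addProfile(kubeconfig, name, server, caPath string) error {
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			return err
		}
		cluster := api.NewCluster()
		cluster.Server = server // e.g. https://192.168.85.2:8443
		cluster.CertificateAuthority = caPath
		cfg.Clusters[name] = cluster

		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx

		return clientcmd.WriteToFile(*cfg, kubeconfig)
	}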
	I0120 14:27:05.226106  950903 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:27:05.237384  950903 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0120 14:27:05.237426  950903 kubeadm.go:597] duration metric: took 29.88751ms to restartPrimaryControlPlane
	I0120 14:27:05.237440  950903 kubeadm.go:394] duration metric: took 132.369088ms to StartCluster
	I0120 14:27:05.237455  950903 settings.go:142] acquiring lock: {Name:mkf7c5865cae55b4373a466e1a24783d8090ef1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:27:05.237527  950903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 14:27:05.238332  950903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-741865/kubeconfig: {Name:mkcf7578b1c91d60616ac7150d8566b28a92e8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:27:05.238571  950903 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 14:27:05.238985  950903 config.go:182] Loaded profile config "old-k8s-version-140749": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 14:27:05.239060  950903 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:27:05.239186  950903 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-140749"
	I0120 14:27:05.239205  950903 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-140749"
	W0120 14:27:05.239212  950903 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:27:05.239226  950903 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-140749"
	I0120 14:27:05.239238  950903 host.go:66] Checking if "old-k8s-version-140749" exists ...
	I0120 14:27:05.239245  950903 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-140749"
	I0120 14:27:05.239633  950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
	I0120 14:27:05.239706  950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
	I0120 14:27:05.240072  950903 addons.go:69] Setting dashboard=true in profile "old-k8s-version-140749"
	I0120 14:27:05.240098  950903 addons.go:238] Setting addon dashboard=true in "old-k8s-version-140749"
	W0120 14:27:05.240112  950903 addons.go:247] addon dashboard should already be in state true
	I0120 14:27:05.240144  950903 host.go:66] Checking if "old-k8s-version-140749" exists ...
	I0120 14:27:05.240645  950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
	I0120 14:27:05.243586  950903 out.go:177] * Verifying Kubernetes components...
	I0120 14:27:05.243875  950903 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-140749"
	I0120 14:27:05.243900  950903 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-140749"
	W0120 14:27:05.243909  950903 addons.go:247] addon metrics-server should already be in state true
	I0120 14:27:05.243943  950903 host.go:66] Checking if "old-k8s-version-140749" exists ...
	I0120 14:27:05.244474  950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
	I0120 14:27:05.247012  950903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:27:05.356825  950903 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:27:05.361335  950903 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:27:05.365757  950903 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:27:05.365783  950903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:27:05.365861  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:05.367141  950903 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:27:05.371458  950903 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-140749"
	W0120 14:27:05.371506  950903 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:27:05.371535  950903 host.go:66] Checking if "old-k8s-version-140749" exists ...
	I0120 14:27:05.372097  950903 cli_runner.go:164] Run: docker container inspect old-k8s-version-140749 --format={{.State.Status}}
	I0120 14:27:05.372392  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:27:05.372419  950903 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:27:05.372491  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:05.429056  950903 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:27:05.432904  950903 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:27:05.432965  950903 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:27:05.433049  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:05.501417  950903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:27:05.506176  950903 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:27:05.506207  950903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:27:05.506291  950903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-140749
	I0120 14:27:05.508329  950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
	I0120 14:27:05.514397  950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
	I0120 14:27:05.564362  950903 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-140749" to be "Ready" ...
	I0120 14:27:05.605809  950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
	I0120 14:27:05.607714  950903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/old-k8s-version-140749/id_rsa Username:docker}
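The repeated docker container inspect template is how minikube discovers which host port Docker published for the container's 22/tcp SSH port; it resolves to 33829 in the sshutil lines above (the extra single quotes in the logged command are shell quoting). The same lookup from Go, shelling out to the docker CLI (a sketch; assumes docker is on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port mapped to 22/tcp for a container by
	// evaluating the same Go template minikube passes to `docker container inspect -f`.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("old-k8s-version-140749")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", port) // 33829 in this run
	}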
	I0120 14:27:05.767403  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:27:05.767482  950903 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:27:05.796135  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:27:05.826472  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:27:05.826563  950903 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:27:05.877079  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:27:05.902753  950903 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:27:05.902829  950903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:27:05.957816  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:27:05.957896  950903 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:27:06.046646  950903 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:27:06.046746  950903 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:27:06.215103  950903 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:27:06.215125  950903 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:27:06.232141  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:27:06.232161  950903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:27:06.386102  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 14:27:06.390615  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:06.390645  950903 retry.go:31] will retry after 196.091052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
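From here until the API server comes back up (around 14:27:17), every kubectl apply fails with "connection refused" on localhost:8443 and retry.go reschedules it after a short, jittered delay. A minimal sketch of that retry loop (the doubling-plus-jitter policy here is an assumption, not minikube's exact schedule):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs fn up to attempts times, sleeping a jittered,
	// roughly doubling delay between failures, like the retry.go lines above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
		return err
	}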
	I0120 14:27:06.414282  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:27:06.414359  950903 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0120 14:27:06.492837  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:06.492919  950903 retry.go:31] will retry after 153.669363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:06.510021  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:27:06.510099  950903 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:27:06.587276  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:27:06.646883  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 14:27:06.656676  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:06.656759  950903 retry.go:31] will retry after 329.873089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:06.714025  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:27:06.714106  950903 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:27:06.898316  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:27:06.898341  950903 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0120 14:27:06.971859  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:06.971890  950903 retry.go:31] will retry after 332.523585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:06.989425  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:27:07.079692  950903 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:27:07.079717  950903 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0120 14:27:07.257035  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.257064  950903 retry.go:31] will retry after 389.315008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 14:27:07.257114  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.257121  950903 retry.go:31] will retry after 311.201685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.259990  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:27:07.305393  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 14:27:07.499873  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.499917  950903 retry.go:31] will retry after 340.602335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 14:27:07.544876  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.544907  950903 retry.go:31] will retry after 761.060402ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.565496  950903 node_ready.go:53] error getting node "old-k8s-version-140749": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140749": dial tcp 192.168.85.2:8443: connect: connection refused
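node_ready.go polls GET /api/v1/nodes/<name> until the node's Ready condition turns True; at this point the API server is still down, so the TCP dial itself is refused rather than the node being NotReady. The equivalent check with client-go (a sketch; clientset construction omitted):

	package nodecheck

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. "connect: connection refused" while the apiserver restarts
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}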
	I0120 14:27:07.568825  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:27:07.647360  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 14:27:07.710745  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.710774  950903 retry.go:31] will retry after 561.574304ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 14:27:07.798469  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.798499  950903 retry.go:31] will retry after 374.389711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.841755  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 14:27:07.960225  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:07.960257  950903 retry.go:31] will retry after 216.314433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:08.173268  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:27:08.177769  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:27:08.273899  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:27:08.306384  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 14:27:08.350735  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:08.350774  950903 retry.go:31] will retry after 1.051790544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 14:27:08.350832  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:08.350846  950903 retry.go:31] will retry after 367.123054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 14:27:08.478156  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:08.478185  950903 retry.go:31] will retry after 451.185223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 14:27:08.478221  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:08.478227  950903 retry.go:31] will retry after 1.020972988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:08.718187  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 14:27:08.833054  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:08.833094  950903 retry.go:31] will retry after 1.060513552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:08.930483  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 14:27:09.011091  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:09.011125  950903 retry.go:31] will retry after 1.634293388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:09.403187  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:27:09.499612  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 14:27:09.504530  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:09.504570  950903 retry.go:31] will retry after 920.703674ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 14:27:09.637074  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:09.637110  950903 retry.go:31] will retry after 1.839561779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:09.894502  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 14:27:09.988456  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:09.988486  950903 retry.go:31] will retry after 1.288416794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:10.065096  950903 node_ready.go:53] error getting node "old-k8s-version-140749": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140749": dial tcp 192.168.85.2:8443: connect: connection refused
	I0120 14:27:10.425699  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 14:27:10.512242  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:10.512273  950903 retry.go:31] will retry after 1.183746708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:10.646163  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 14:27:10.775368  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:10.775399  950903 retry.go:31] will retry after 1.589620431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:11.277572  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 14:27:11.385863  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:11.385896  950903 retry.go:31] will retry after 1.201291143s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:11.477082  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 14:27:11.641364  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:11.641396  950903 retry.go:31] will retry after 2.295164699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:11.696649  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 14:27:11.829497  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:11.829527  950903 retry.go:31] will retry after 1.693668479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:12.065330  950903 node_ready.go:53] error getting node "old-k8s-version-140749": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140749": dial tcp 192.168.85.2:8443: connect: connection refused
	I0120 14:27:12.366227  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 14:27:12.521560  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:12.521607  950903 retry.go:31] will retry after 3.065694042s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:12.587501  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 14:27:12.719630  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:12.719662  950903 retry.go:31] will retry after 2.315984036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:13.523637  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 14:27:13.815173  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:13.815203  950903 retry.go:31] will retry after 4.036304951s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:13.937382  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:27:14.065492  950903 node_ready.go:53] error getting node "old-k8s-version-140749": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-140749": dial tcp 192.168.85.2:8443: connect: connection refused
	W0120 14:27:14.268764  950903 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:14.268793  950903 retry.go:31] will retry after 2.427911411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 14:27:15.036035  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:27:15.587504  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:27:16.697755  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:27:17.851680  950903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:27:23.705233  950903 node_ready.go:49] node "old-k8s-version-140749" has status "Ready":"True"
	I0120 14:27:23.705256  950903 node_ready.go:38] duration metric: took 18.140817176s for node "old-k8s-version-140749" to be "Ready" ...
	I0120 14:27:23.705266  950903 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
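pod_ready.go now repeats one pattern per control-plane component: list the kube-system pods matching each label above, then block until each reports the PodReady condition. A client-go sketch of a single such check (the selector values come from the log; the helper name is illustrative):

	package podcheck

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// firstPodReady reports whether the first kube-system pod matching the
	// label selector (e.g. "component=etcd") has its Ready condition set to True.
	func firstPodReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, err
		}
		for _, cond := range pods.Items[0].Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}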
	I0120 14:27:24.090962  950903 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-qsqbp" in "kube-system" namespace to be "Ready" ...
	I0120 14:27:24.174534  950903 pod_ready.go:93] pod "coredns-74ff55c5b-qsqbp" in "kube-system" namespace has status "Ready":"True"
	I0120 14:27:24.174610  950903 pod_ready.go:82] duration metric: took 83.553392ms for pod "coredns-74ff55c5b-qsqbp" in "kube-system" namespace to be "Ready" ...
	I0120 14:27:24.174637  950903 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
	I0120 14:27:24.409236  950903 pod_ready.go:93] pod "etcd-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
	I0120 14:27:24.409321  950903 pod_ready.go:82] duration metric: took 234.656558ms for pod "etcd-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
	I0120 14:27:24.409352  950903 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
	I0120 14:27:26.208292  950903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.620750711s)
	I0120 14:27:26.208331  950903 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-140749"
	I0120 14:27:26.208392  950903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.510611845s)
	I0120 14:27:26.208417  950903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.356715266s)
	I0120 14:27:26.208671  950903 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.172522419s)
	I0120 14:27:26.212035  950903 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-140749 addons enable metrics-server
	
	I0120 14:27:26.290132  950903 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0120 14:27:26.293685  950903 addons.go:514] duration metric: took 21.054624617s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
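Note the overlapping durations in the Completed lines above: the four addon applies (10.6s, 9.5s, 8.4s, 11.2s) all finish within the same second, because they were launched concurrently over separate SSH runs while the readiness polling continued. A sketch of that fan-out, assuming a simple WaitGroup — illustrative, not minikube's addons.go:

	// Sketch only: apply several addon manifests in parallel and wait for all
	// of them, as the interleaved Run/Completed lines above suggest.
	package main

	import (
		"fmt"
		"os/exec"
		"sync"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
			// dashboard and metrics-server manifests omitted for brevity
		}
		var wg sync.WaitGroup
		for _, m := range manifests {
			wg.Add(1)
			go func(m string) {
				defer wg.Done()
				out, err := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
					"/var/lib/minikube/binaries/v1.20.0/kubectl", "apply", "--force", "-f", m).CombinedOutput()
				fmt.Printf("%s: err=%v out=%s\n", m, err, out)
			}(m)
		}
		wg.Wait()
	}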
	I0120 14:27:26.444133  950903 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:28.917700  950903 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:31.416051  950903 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:32.917715  950903 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
	I0120 14:27:32.917746  950903 pod_ready.go:82] duration metric: took 8.508356338s for pod "kube-apiserver-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
	I0120 14:27:32.917758  950903 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
	I0120 14:27:34.928348  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:37.426345  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:39.441421  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:41.924301  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:43.931714  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:46.424893  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:48.427826  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:50.431191  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:52.928759  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:54.928797  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:57.424755  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:27:59.924589  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:01.924859  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:03.924974  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:05.925579  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:08.425051  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:10.924139  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:12.926705  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:15.425527  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:17.924641  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:19.927625  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:22.430449  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:24.923697  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:26.924224  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:28.925595  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:30.928458  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:33.424309  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:35.424578  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:37.425229  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:39.924305  950903 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:41.433729  950903 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:41.433757  950903 pod_ready.go:82] duration metric: took 1m8.515990789s for pod "kube-controller-manager-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:41.433770  950903 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wrpl6" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:41.441339  950903 pod_ready.go:93] pod "kube-proxy-wrpl6" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:41.441367  950903 pod_ready.go:82] duration metric: took 7.589685ms for pod "kube-proxy-wrpl6" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:41.441387  950903 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:42.449722  950903 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:42.449748  950903 pod_ready.go:82] duration metric: took 1.008350501s for pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:42.449760  950903 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:44.455878  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:46.456063  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:48.456951  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:50.982318  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:53.457118  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:55.457832  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:57.957293  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:00.457254  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:02.957338  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:05.457281  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:07.956788  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:10.455727  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:12.455805  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:14.455919  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:16.956594  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:18.957055  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:20.957177  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:22.977897  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:25.456100  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:27.956285  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:29.957097  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:31.958545  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:34.520016  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:36.958082  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:39.455827  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:41.456412  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:43.465069  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:45.956385  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:48.456073  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:50.957169  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:53.456920  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:55.460163  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:57.956176  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:59.957071  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:01.967055  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:04.456138  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:06.956305  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:08.956902  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:11.455925  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:13.956200  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:15.956651  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:17.956978  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:19.957565  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:22.456276  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:24.970006  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:27.456774  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:29.463141  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:31.957178  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:34.455767  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:36.956611  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:39.456640  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:41.956918  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:43.973328  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:46.455494  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:48.455765  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:50.456716  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:52.956367  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:54.956544  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:57.457408  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:59.955937  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:01.957235  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:03.958142  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:06.461136  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:08.956483  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:10.956661  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:13.456294  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:15.456562  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:17.955801  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:19.956567  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:21.956906  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:23.956980  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:26.458636  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:28.957544  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:31.456217  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:33.956541  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:36.456146  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:38.456341  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:40.456681  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:42.955903  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:44.956479  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:47.456415  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:49.956064  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:51.956572  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:54.456227  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:56.456785  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:58.956968  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:00.957085  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:02.957264  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:04.962625  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:07.455559  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:09.455774  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:11.456500  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:13.956820  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:16.025898  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:18.457623  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:20.957089  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:23.456405  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:25.955753  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:28.456663  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:30.463692  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:32.956881  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:34.956937  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:36.960987  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:39.456248  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:41.456476  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:42.456347  950903 pod_ready.go:82] duration metric: took 4m0.0065748s for pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace to be "Ready" ...
	E0120 14:32:42.456373  950903 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 14:32:42.456384  950903 pod_ready.go:39] duration metric: took 5m18.75110665s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
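This is the decisive failure: every other system-critical pod went Ready, but metrics-server-9975d5f86-lfq2q polled "Ready":"False" for 4m0s until the surrounding WaitExtra deadline expired, which is what ultimately fails this test. The wait seen above boils down to polling a condition under a context deadline; a minimal sketch, assuming a plain ticker — not minikube's pod_ready.go:

	// Sketch only: poll a pod condition until it is Ready or the deadline
	// expires, surfacing "context deadline exceeded" as seen at 14:32:42.
	// The isReady check is a stand-in for a real API lookup.
	package main

	import (
		"context"
		"fmt"
		"time"
	)

	func waitPodReady(ctx context.Context, isReady func() bool) error {
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			if isReady() {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("waitPodCondition: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		// Never Ready, like metrics-server in this run.
		fmt.Println(waitPodReady(ctx, func() bool { return false }))
	}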
	I0120 14:32:42.456400  950903 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:32:42.456430  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:32:42.456494  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:32:42.495561  950903 cri.go:89] found id: "7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
	I0120 14:32:42.495581  950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
	I0120 14:32:42.495586  950903 cri.go:89] found id: ""
	I0120 14:32:42.495593  950903 logs.go:282] 2 containers: [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63]
	I0120 14:32:42.495650  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.499420  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.502920  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:32:42.503009  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:32:42.542022  950903 cri.go:89] found id: "260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
	I0120 14:32:42.542087  950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
	I0120 14:32:42.542106  950903 cri.go:89] found id: ""
	I0120 14:32:42.542131  950903 logs.go:282] 2 containers: [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b]
	I0120 14:32:42.542221  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.546159  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.549559  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:32:42.549707  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:32:42.588844  950903 cri.go:89] found id: "df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
	I0120 14:32:42.588910  950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
	I0120 14:32:42.588931  950903 cri.go:89] found id: ""
	I0120 14:32:42.588965  950903 logs.go:282] 2 containers: [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d]
	I0120 14:32:42.589060  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.593064  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.596734  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:32:42.596827  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:32:42.637742  950903 cri.go:89] found id: "901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
	I0120 14:32:42.637766  950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
	I0120 14:32:42.637772  950903 cri.go:89] found id: ""
	I0120 14:32:42.637779  950903 logs.go:282] 2 containers: [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071]
	I0120 14:32:42.637837  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.641531  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.645214  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:32:42.645294  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:32:42.694848  950903 cri.go:89] found id: "980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
	I0120 14:32:42.694873  950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
	I0120 14:32:42.694878  950903 cri.go:89] found id: ""
	I0120 14:32:42.694885  950903 logs.go:282] 2 containers: [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25]
	I0120 14:32:42.694944  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.698884  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.702523  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:32:42.702604  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:32:42.744000  950903 cri.go:89] found id: "cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
	I0120 14:32:42.744031  950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
	I0120 14:32:42.744037  950903 cri.go:89] found id: ""
	I0120 14:32:42.744045  950903 logs.go:282] 2 containers: [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec]
	I0120 14:32:42.744145  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.748068  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.751593  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:32:42.751671  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:32:42.788738  950903 cri.go:89] found id: "15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
	I0120 14:32:42.788761  950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
	I0120 14:32:42.788766  950903 cri.go:89] found id: ""
	I0120 14:32:42.788773  950903 logs.go:282] 2 containers: [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f]
	I0120 14:32:42.788833  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.792694  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.796248  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:32:42.796327  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:32:42.835380  950903 cri.go:89] found id: "c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
	I0120 14:32:42.835402  950903 cri.go:89] found id: ""
	I0120 14:32:42.835411  950903 logs.go:282] 1 containers: [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6]
	I0120 14:32:42.835470  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.839424  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:32:42.839588  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:32:42.886867  950903 cri.go:89] found id: "0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
	I0120 14:32:42.886943  950903 cri.go:89] found id: "46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
	I0120 14:32:42.886963  950903 cri.go:89] found id: ""
	I0120 14:32:42.886990  950903 logs.go:282] 2 containers: [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa]
	I0120 14:32:42.887084  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.892761  950903 ssh_runner.go:195] Run: which crictl
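With the wait abandoned, minikube switches to diagnostics. Each cri.go/ssh_runner pair above runs `sudo crictl ps -a --quiet --name=<component>`, which prints one bare container ID per line; most components show two IDs because `-a` includes exited containers, so both the pre-restart and post-restart instances are listed. A sketch of that collection step, with assumed helper names — illustrative, not minikube's cri.go:

	// Sketch only: gather container IDs for one component via crictl, the
	// pattern repeated above for kube-apiserver, etcd, coredns, and so on.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one ID per output line
	}

	func main() {
		ids, err := containerIDs("kube-apiserver")
		fmt.Println(ids, err)
	}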
	I0120 14:32:42.897255  950903 logs.go:123] Gathering logs for dmesg ...
	I0120 14:32:42.897281  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:32:42.915606  950903 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:32:42.915635  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:32:43.086993  950903 logs.go:123] Gathering logs for etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] ...
	I0120 14:32:43.087027  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
	I0120 14:32:43.137045  950903 logs.go:123] Gathering logs for coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] ...
	I0120 14:32:43.137078  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
	I0120 14:32:43.177316  950903 logs.go:123] Gathering logs for kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] ...
	I0120 14:32:43.177346  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
	I0120 14:32:43.226521  950903 logs.go:123] Gathering logs for kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] ...
	I0120 14:32:43.226552  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
	I0120 14:32:43.277166  950903 logs.go:123] Gathering logs for containerd ...
	I0120 14:32:43.277198  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:32:43.350057  950903 logs.go:123] Gathering logs for kubelet ...
	I0120 14:32:43.350162  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 14:32:43.415129  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.690899     663 reflector.go:138] object-"kube-system"/"coredns-token-f95sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f95sh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.415423  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691117     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.415671  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691376     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.415917  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691453     663 reflector.go:138] object-"default"/"default-token-8wp7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8wp7x" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.416155  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691503     663 reflector.go:138] object-"kube-system"/"kindnet-token-xx7dh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xx7dh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.416381  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691562     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-s6tbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s6tbt" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.416607  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691635     663 reflector.go:138] object-"kube-system"/"metrics-server-token-dgscp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dgscp" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.416848  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.692028     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-mlrbf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-mlrbf" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.425962  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:27 old-k8s-version-140749 kubelet[663]: E0120 14:27:27.904251     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.426161  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:28 old-k8s-version-140749 kubelet[663]: E0120 14:27:28.466147     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.428994  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:41 old-k8s-version-140749 kubelet[663]: E0120 14:27:41.953783     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.430807  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:43 old-k8s-version-140749 kubelet[663]: E0120 14:27:43.761273     663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-xh79t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-xh79t" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.431350  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:55 old-k8s-version-140749 kubelet[663]: E0120 14:27:55.965627     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.432061  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:57 old-k8s-version-140749 kubelet[663]: E0120 14:27:57.605396     663 pod_workers.go:191] Error syncing pod e9c231b5-a5c1-498d-aa26-caf987208dc2 ("storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"
	W0120 14:32:43.432532  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:58 old-k8s-version-140749 kubelet[663]: E0120 14:27:58.615334     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.432875  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:59 old-k8s-version-140749 kubelet[663]: E0120 14:27:59.633074     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.433786  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:05 old-k8s-version-140749 kubelet[663]: E0120 14:28:05.586265     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.436345  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:06 old-k8s-version-140749 kubelet[663]: E0120 14:28:06.962070     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.436803  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:17 old-k8s-version-140749 kubelet[663]: E0120 14:28:17.944316     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.437380  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:18 old-k8s-version-140749 kubelet[663]: E0120 14:28:18.685866     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.437740  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:25 old-k8s-version-140749 kubelet[663]: E0120 14:28:25.586275     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.437928  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:32 old-k8s-version-140749 kubelet[663]: E0120 14:28:32.945271     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.438346  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:37 old-k8s-version-140749 kubelet[663]: E0120 14:28:37.943400     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.438539  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:46 old-k8s-version-140749 kubelet[663]: E0120 14:28:46.943760     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.439145  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:51 old-k8s-version-140749 kubelet[663]: E0120 14:28:51.768837     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.439485  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:55 old-k8s-version-140749 kubelet[663]: E0120 14:28:55.585724     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.442279  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:00 old-k8s-version-140749 kubelet[663]: E0120 14:29:00.952397     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.442626  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:10 old-k8s-version-140749 kubelet[663]: E0120 14:29:10.942909     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.442824  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:11 old-k8s-version-140749 kubelet[663]: E0120 14:29:11.944209     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.443346  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:22 old-k8s-version-140749 kubelet[663]: E0120 14:29:22.951255     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.443537  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:23 old-k8s-version-140749 kubelet[663]: E0120 14:29:23.956425     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.444140  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:35 old-k8s-version-140749 kubelet[663]: E0120 14:29:35.903419     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.444330  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:36 old-k8s-version-140749 kubelet[663]: E0120 14:29:36.945915     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.444679  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:45 old-k8s-version-140749 kubelet[663]: E0120 14:29:45.585844     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.444873  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:51 old-k8s-version-140749 kubelet[663]: E0120 14:29:51.943550     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.445206  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:59 old-k8s-version-140749 kubelet[663]: E0120 14:29:59.943021     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.445390  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:05 old-k8s-version-140749 kubelet[663]: E0120 14:30:05.943119     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.445746  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:13 old-k8s-version-140749 kubelet[663]: E0120 14:30:13.942813     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.445986  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:17 old-k8s-version-140749 kubelet[663]: E0120 14:30:17.943166     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.446323  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.943282     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.451516  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.959102     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.451888  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:40 old-k8s-version-140749 kubelet[663]: E0120 14:30:40.946333     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.452090  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:43 old-k8s-version-140749 kubelet[663]: E0120 14:30:43.946388     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.452419  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:52 old-k8s-version-140749 kubelet[663]: E0120 14:30:52.943384     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.452606  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:57 old-k8s-version-140749 kubelet[663]: E0120 14:30:57.943462     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.453215  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:04 old-k8s-version-140749 kubelet[663]: E0120 14:31:04.184881     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.453555  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:05 old-k8s-version-140749 kubelet[663]: E0120 14:31:05.586278     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.453747  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:11 old-k8s-version-140749 kubelet[663]: E0120 14:31:11.943489     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.454085  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:19 old-k8s-version-140749 kubelet[663]: E0120 14:31:19.942873     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.454273  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.454460  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.454796  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.454982  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.455332  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.455568  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.455909  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.456251  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.456438  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.456624  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.456954  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.457154  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.457520  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
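The scanner keeps flagging the same two pods above rather than finding anything new: metrics-server cannot start because its image points at the unresolvable registry fake.domain (the lookup against 192.168.85.1:53 fails, so the pull backs off indefinitely), and dashboard-metrics-scraper sits in a CrashLoopBackOff whose delay has grown from 1m20s to 2m40s. A quick way to confirm the DNS side from inside the node, reusing this run's profile name (and assuming getent is present in the node image):

	# both should fail with the same "no such host" seen in the ErrImagePull entries
	minikube -p old-k8s-version-140749 ssh -- getent hosts fake.domain
	minikube -p old-k8s-version-140749 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4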
	I0120 14:32:43.457532  950903 logs.go:123] Gathering logs for kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] ...
	I0120 14:32:43.457547  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
	I0120 14:32:43.513402  950903 logs.go:123] Gathering logs for kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] ...
	I0120 14:32:43.513432  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
	I0120 14:32:43.575002  950903 logs.go:123] Gathering logs for kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] ...
	I0120 14:32:43.575049  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
	I0120 14:32:43.635251  950903 logs.go:123] Gathering logs for storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] ...
	I0120 14:32:43.635291  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
	I0120 14:32:43.679772  950903 logs.go:123] Gathering logs for kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] ...
	I0120 14:32:43.679802  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
	I0120 14:32:43.725126  950903 logs.go:123] Gathering logs for kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] ...
	I0120 14:32:43.725160  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
	I0120 14:32:43.764221  950903 logs.go:123] Gathering logs for storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] ...
	I0120 14:32:43.764246  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
	I0120 14:32:43.803933  950903 logs.go:123] Gathering logs for kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] ...
	I0120 14:32:43.803963  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
	I0120 14:32:43.865136  950903 logs.go:123] Gathering logs for kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] ...
	I0120 14:32:43.865173  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
	I0120 14:32:43.927846  950903 logs.go:123] Gathering logs for etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] ...
	I0120 14:32:43.927885  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
	I0120 14:32:43.976062  950903 logs.go:123] Gathering logs for coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] ...
	I0120 14:32:43.976150  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
	I0120 14:32:44.017480  950903 logs.go:123] Gathering logs for kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] ...
	I0120 14:32:44.017512  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
	I0120 14:32:44.074744  950903 logs.go:123] Gathering logs for kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] ...
	I0120 14:32:44.074778  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
	I0120 14:32:44.129782  950903 logs.go:123] Gathering logs for container status ...
	I0120 14:32:44.129812  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:32:44.177518  950903 out.go:358] Setting ErrFile to fd 2...
	I0120 14:32:44.177547  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 14:32:44.177739  950903 out.go:270] X Problems detected in kubelet:
	W0120 14:32:44.177760  950903 out.go:270]   Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:44.177785  950903 out.go:270]   Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:44.177798  950903 out.go:270]   Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:44.177805  950903 out.go:270]   Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:44.177811  950903 out.go:270]   Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	I0120 14:32:44.177818  950903 out.go:358] Setting ErrFile to fd 2...
	I0120 14:32:44.177825  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
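With the kubelet summary printed, minikube returns to waiting for the apiserver: the pgrep below confirms the process exists, after which it polls the healthz endpoint. A rough manual equivalent, assuming the kubeconfig context carries the profile name and that curl is available in the node image (8443 being the usual minikube apiserver port):

	# either check returns the literal string "ok" when the apiserver is healthy
	kubectl --context old-k8s-version-140749 get --raw=/healthz
	minikube -p old-k8s-version-140749 ssh -- curl -sk https://localhost:8443/healthz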
	I0120 14:32:54.183032  950903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:32:54.196296  950903 api_server.go:72] duration metric: took 5m48.957681866s to wait for apiserver process to appear ...
	I0120 14:32:54.196319  950903 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:32:54.196358  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:32:54.196418  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:32:54.237364  950903 cri.go:89] found id: "7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
	I0120 14:32:54.237383  950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
	I0120 14:32:54.237388  950903 cri.go:89] found id: ""
	I0120 14:32:54.237395  950903 logs.go:282] 2 containers: [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63]
	I0120 14:32:54.237452  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.241365  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.244944  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:32:54.245021  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:32:54.290562  950903 cri.go:89] found id: "260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
	I0120 14:32:54.290585  950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
	I0120 14:32:54.290590  950903 cri.go:89] found id: ""
	I0120 14:32:54.290598  950903 logs.go:282] 2 containers: [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b]
	I0120 14:32:54.290659  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.294510  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.298115  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:32:54.298194  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:32:54.343372  950903 cri.go:89] found id: "df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
	I0120 14:32:54.343391  950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
	I0120 14:32:54.343396  950903 cri.go:89] found id: ""
	I0120 14:32:54.343403  950903 logs.go:282] 2 containers: [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d]
	I0120 14:32:54.343464  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.349876  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.353487  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:32:54.353670  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:32:54.404374  950903 cri.go:89] found id: "901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
	I0120 14:32:54.404402  950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
	I0120 14:32:54.404407  950903 cri.go:89] found id: ""
	I0120 14:32:54.404415  950903 logs.go:282] 2 containers: [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071]
	I0120 14:32:54.404476  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.408537  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.412682  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:32:54.412783  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:32:54.460122  950903 cri.go:89] found id: "980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
	I0120 14:32:54.460145  950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
	I0120 14:32:54.460150  950903 cri.go:89] found id: ""
	I0120 14:32:54.460158  950903 logs.go:282] 2 containers: [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25]
	I0120 14:32:54.460215  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.464203  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.468701  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:32:54.468781  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:32:54.517365  950903 cri.go:89] found id: "cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
	I0120 14:32:54.517389  950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
	I0120 14:32:54.517394  950903 cri.go:89] found id: ""
	I0120 14:32:54.517401  950903 logs.go:282] 2 containers: [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec]
	I0120 14:32:54.517461  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.521673  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.525274  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:32:54.525351  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:32:54.571915  950903 cri.go:89] found id: "15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
	I0120 14:32:54.571943  950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
	I0120 14:32:54.571950  950903 cri.go:89] found id: ""
	I0120 14:32:54.571957  950903 logs.go:282] 2 containers: [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f]
	I0120 14:32:54.572019  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.576070  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.579794  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:32:54.579879  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:32:54.618519  950903 cri.go:89] found id: "0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
	I0120 14:32:54.618588  950903 cri.go:89] found id: "46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
	I0120 14:32:54.618600  950903 cri.go:89] found id: ""
	I0120 14:32:54.618609  950903 logs.go:282] 2 containers: [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa]
	I0120 14:32:54.618677  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.622286  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.625962  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:32:54.626082  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:32:54.665109  950903 cri.go:89] found id: "c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
	I0120 14:32:54.665134  950903 cri.go:89] found id: ""
	I0120 14:32:54.665143  950903 logs.go:282] 1 containers: [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6]
	I0120 14:32:54.665201  950903 ssh_runner.go:195] Run: which crictl
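The block above is the collector's enumeration pass: for each control-plane component it lists current and exited containers with crictl, and the tail commands below then grab the last 400 lines of each ID found. The same data can be pulled by hand on the node, where the ID placeholder stands for whatever the ps step printed:

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>

Most components report two IDs because the restart left both the pre-stop and post-restart containers behind; only kubernetes-dashboard has a single one.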
	I0120 14:32:54.668912  950903 logs.go:123] Gathering logs for kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] ...
	I0120 14:32:54.668936  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
	I0120 14:32:54.731588  950903 logs.go:123] Gathering logs for kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] ...
	I0120 14:32:54.731623  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
	I0120 14:32:54.798223  950903 logs.go:123] Gathering logs for kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] ...
	I0120 14:32:54.798262  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
	I0120 14:32:54.849667  950903 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:32:54.849699  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:32:55.017611  950903 logs.go:123] Gathering logs for kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] ...
	I0120 14:32:55.017703  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
	I0120 14:32:55.079897  950903 logs.go:123] Gathering logs for kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] ...
	I0120 14:32:55.079935  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
	I0120 14:32:55.127145  950903 logs.go:123] Gathering logs for kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] ...
	I0120 14:32:55.127184  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
	I0120 14:32:55.179168  950903 logs.go:123] Gathering logs for kubelet ...
	I0120 14:32:55.179197  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 14:32:55.231529  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.690899     663 reflector.go:138] object-"kube-system"/"coredns-token-f95sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f95sh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.231791  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691117     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232001  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691376     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232213  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691453     663 reflector.go:138] object-"default"/"default-token-8wp7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8wp7x" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232424  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691503     663 reflector.go:138] object-"kube-system"/"kindnet-token-xx7dh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xx7dh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232643  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691562     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-s6tbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s6tbt" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232867  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691635     663 reflector.go:138] object-"kube-system"/"metrics-server-token-dgscp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dgscp" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.233121  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.692028     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-mlrbf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-mlrbf" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.242036  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:27 old-k8s-version-140749 kubelet[663]: E0120 14:27:27.904251     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.242233  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:28 old-k8s-version-140749 kubelet[663]: E0120 14:27:28.466147     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.245063  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:41 old-k8s-version-140749 kubelet[663]: E0120 14:27:41.953783     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.246929  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:43 old-k8s-version-140749 kubelet[663]: E0120 14:27:43.761273     663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-xh79t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-xh79t" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.247464  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:55 old-k8s-version-140749 kubelet[663]: E0120 14:27:55.965627     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.248064  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:57 old-k8s-version-140749 kubelet[663]: E0120 14:27:57.605396     663 pod_workers.go:191] Error syncing pod e9c231b5-a5c1-498d-aa26-caf987208dc2 ("storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"
	W0120 14:32:55.248529  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:58 old-k8s-version-140749 kubelet[663]: E0120 14:27:58.615334     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.248857  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:59 old-k8s-version-140749 kubelet[663]: E0120 14:27:59.633074     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.249550  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:05 old-k8s-version-140749 kubelet[663]: E0120 14:28:05.586265     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.252091  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:06 old-k8s-version-140749 kubelet[663]: E0120 14:28:06.962070     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.252545  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:17 old-k8s-version-140749 kubelet[663]: E0120 14:28:17.944316     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.253009  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:18 old-k8s-version-140749 kubelet[663]: E0120 14:28:18.685866     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.253399  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:25 old-k8s-version-140749 kubelet[663]: E0120 14:28:25.586275     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.253597  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:32 old-k8s-version-140749 kubelet[663]: E0120 14:28:32.945271     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.253925  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:37 old-k8s-version-140749 kubelet[663]: E0120 14:28:37.943400     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.254111  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:46 old-k8s-version-140749 kubelet[663]: E0120 14:28:46.943760     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.254695  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:51 old-k8s-version-140749 kubelet[663]: E0120 14:28:51.768837     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.255022  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:55 old-k8s-version-140749 kubelet[663]: E0120 14:28:55.585724     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.258008  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:00 old-k8s-version-140749 kubelet[663]: E0120 14:29:00.952397     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.258363  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:10 old-k8s-version-140749 kubelet[663]: E0120 14:29:10.942909     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.258551  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:11 old-k8s-version-140749 kubelet[663]: E0120 14:29:11.944209     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.258884  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:22 old-k8s-version-140749 kubelet[663]: E0120 14:29:22.951255     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.259078  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:23 old-k8s-version-140749 kubelet[663]: E0120 14:29:23.956425     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.259667  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:35 old-k8s-version-140749 kubelet[663]: E0120 14:29:35.903419     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.259852  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:36 old-k8s-version-140749 kubelet[663]: E0120 14:29:36.945915     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.260180  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:45 old-k8s-version-140749 kubelet[663]: E0120 14:29:45.585844     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.260364  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:51 old-k8s-version-140749 kubelet[663]: E0120 14:29:51.943550     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.260690  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:59 old-k8s-version-140749 kubelet[663]: E0120 14:29:59.943021     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.260876  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:05 old-k8s-version-140749 kubelet[663]: E0120 14:30:05.943119     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.261204  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:13 old-k8s-version-140749 kubelet[663]: E0120 14:30:13.942813     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.261391  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:17 old-k8s-version-140749 kubelet[663]: E0120 14:30:17.943166     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.261725  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.943282     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.264343  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.959102     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.264682  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:40 old-k8s-version-140749 kubelet[663]: E0120 14:30:40.946333     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.264869  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:43 old-k8s-version-140749 kubelet[663]: E0120 14:30:43.946388     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.265195  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:52 old-k8s-version-140749 kubelet[663]: E0120 14:30:52.943384     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.265378  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:57 old-k8s-version-140749 kubelet[663]: E0120 14:30:57.943462     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.265970  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:04 old-k8s-version-140749 kubelet[663]: E0120 14:31:04.184881     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.266300  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:05 old-k8s-version-140749 kubelet[663]: E0120 14:31:05.586278     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.266484  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:11 old-k8s-version-140749 kubelet[663]: E0120 14:31:11.943489     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.266811  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:19 old-k8s-version-140749 kubelet[663]: E0120 14:31:19.942873     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.266995  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.267180  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.267508  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.267693  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.268018  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.268202  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.268526  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.268851  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.269034  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.269217  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.269551  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.269743  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.270064  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.270393  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.270576  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
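	The two signatures above simply repeat for the rest of the wait loop: dashboard-metrics-scraper crash-loops with a growing back-off (1m20s, then 2m40s), and metrics-server sits in ImagePullBackOff because the test deliberately re-points its image at the unreachable fake.domain registry via `addons enable metrics-server --registries=MetricsServer=fake.domain` (see the Audit table below). To read the kubelet log these warnings were parsed from while the profile is still running, something along these lines should work (a sketch; the exact ssh invocation may vary by minikube version):
	
	  minikube ssh -p old-k8s-version-140749 -- sudo journalctl -u kubelet -n 400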
	I0120 14:32:55.270586  950903 logs.go:123] Gathering logs for etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] ...
	I0120 14:32:55.270600  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
	I0120 14:32:55.318446  950903 logs.go:123] Gathering logs for coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] ...
	I0120 14:32:55.318482  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
	I0120 14:32:55.374342  950903 logs.go:123] Gathering logs for dmesg ...
	I0120 14:32:55.374372  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:32:55.397751  950903 logs.go:123] Gathering logs for etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] ...
	I0120 14:32:55.397781  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
	I0120 14:32:55.441396  950903 logs.go:123] Gathering logs for kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] ...
	I0120 14:32:55.441427  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
	I0120 14:32:55.485012  950903 logs.go:123] Gathering logs for kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] ...
	I0120 14:32:55.485049  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
	I0120 14:32:55.538388  950903 logs.go:123] Gathering logs for storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] ...
	I0120 14:32:55.538415  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
	I0120 14:32:55.603551  950903 logs.go:123] Gathering logs for storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] ...
	I0120 14:32:55.603583  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
	I0120 14:32:55.653716  950903 logs.go:123] Gathering logs for kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] ...
	I0120 14:32:55.653743  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
	I0120 14:32:55.705317  950903 logs.go:123] Gathering logs for kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] ...
	I0120 14:32:55.705344  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
	I0120 14:32:55.761106  950903 logs.go:123] Gathering logs for coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] ...
	I0120 14:32:55.761142  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
	I0120 14:32:55.800636  950903 logs.go:123] Gathering logs for kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] ...
	I0120 14:32:55.800666  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
	I0120 14:32:55.845669  950903 logs.go:123] Gathering logs for containerd ...
	I0120 14:32:55.845701  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:32:55.917760  950903 logs.go:123] Gathering logs for container status ...
	I0120 14:32:55.917799  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:32:55.994852  950903 out.go:358] Setting ErrFile to fd 2...
	I0120 14:32:55.994879  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 14:32:55.994927  950903 out.go:270] X Problems detected in kubelet:
	W0120 14:32:55.994945  950903 out.go:270]   Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.994954  950903 out.go:270]   Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.994966  950903 out.go:270]   Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.994973  950903 out.go:270]   Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.994985  950903 out.go:270]   Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0120 14:32:55.994992  950903 out.go:358] Setting ErrFile to fd 2...
	I0120 14:32:55.995007  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:33:05.995189  950903 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0120 14:33:06.005351  950903 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0120 14:33:06.009443  950903 out.go:201] 
	W0120 14:33:06.013033  950903 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0120 14:33:06.013087  950903 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0120 14:33:06.013119  950903 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0120 14:33:06.013130  950903 out.go:270] * 
	W0120 14:33:06.014124  950903 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 14:33:06.017802  950903 out.go:201] 

                                                
                                                
** /stderr **
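Note that the apiserver healthz probe (https://192.168.85.2:8443/healthz) returns 200 immediately before the exit: the start fails with K8S_UNHEALTHY_CONTROL_PLANE because the control plane never reported the expected v1.20.0 version within the 6m0s node wait, not because the API server was unreachable. When reproducing locally, the log's own advice can be applied as-is; a minimal recovery sequence, capturing logs first as the box suggests:

    out/minikube-linux-arm64 -p old-k8s-version-140749 logs --file=logs.txt
    out/minikube-linux-arm64 delete --all --purge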
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-140749
helpers_test.go:235: (dbg) docker inspect old-k8s-version-140749:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135",
	        "Created": "2025-01-20T14:24:14.373777705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 951200,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-20T14:26:57.317836884Z",
	            "FinishedAt": "2025-01-20T14:26:56.187336949Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135/hosts",
	        "LogPath": "/var/lib/docker/containers/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135/b9e09679f40709bef351bb616c57cf6f23fe9043e87022271848a7818e2f1135-json.log",
	        "Name": "/old-k8s-version-140749",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-140749:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-140749",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/028860b59d8ea195b0fb1cfa13f453bb1f8e253fd630ca752a26ac065a860937-init/diff:/var/lib/docker/overlay2/59354dd32046d8588beaaa77dbeeb3a26843a7c570ae5e66a22312f5030cf994/diff",
	                "MergedDir": "/var/lib/docker/overlay2/028860b59d8ea195b0fb1cfa13f453bb1f8e253fd630ca752a26ac065a860937/merged",
	                "UpperDir": "/var/lib/docker/overlay2/028860b59d8ea195b0fb1cfa13f453bb1f8e253fd630ca752a26ac065a860937/diff",
	                "WorkDir": "/var/lib/docker/overlay2/028860b59d8ea195b0fb1cfa13f453bb1f8e253fd630ca752a26ac065a860937/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-140749",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-140749/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-140749",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-140749",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-140749",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9a99c187cf4afeeb2ae3836d4a8f90eee55e8ebd52420897d9108ef7c986fcf",
	            "SandboxKey": "/var/run/docker/netns/f9a99c187cf4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-140749": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "37f97352090f4bab13768da51fc0a8b4e0c2adb64e5d4d447c2ef43471e862ae",
	                    "EndpointID": "7a2d86ae765090951475cad42a22f7da12479d57c6791bc691908f84471040b8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-140749",
	                        "b9e09679f407"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
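The inspect output shows a clean restart (ExitCode 0, with StartedAt 14:26:57 following FinishedAt 14:26:56) and the expected static address 192.168.85.2 on the old-k8s-version-140749 network. Individual fields can be pulled without the full dump using the standard Docker CLI, for example:

    docker inspect --format '{{.State.Status}} {{.State.StartedAt}}' old-k8s-version-140749
    docker port old-k8s-version-140749 8443

The second command should print the 127.0.0.1:33832 mapping listed under NetworkSettings.Ports.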
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140749 -n old-k8s-version-140749
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-140749 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-140749 logs -n 25: (2.08502213s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p pause-853381                                        | pause-853381             | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-071479                               | force-systemd-env-071479 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-071479                            | force-systemd-env-071479 | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
	| start   | -p cert-expiration-857413                              | cert-expiration-857413   | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| pause   | -p pause-853381                                        | pause-853381             | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| unpause | -p pause-853381                                        | pause-853381             | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| pause   | -p pause-853381                                        | pause-853381             | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| delete  | -p pause-853381                                        | pause-853381             | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| delete  | -p pause-853381                                        | pause-853381             | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:23 UTC |
	| start   | -p cert-options-968792                                 | cert-options-968792      | jenkins | v1.35.0 | 20 Jan 25 14:23 UTC | 20 Jan 25 14:24 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-968792 ssh                                | cert-options-968792      | jenkins | v1.35.0 | 20 Jan 25 14:24 UTC | 20 Jan 25 14:24 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-968792 -- sudo                         | cert-options-968792      | jenkins | v1.35.0 | 20 Jan 25 14:24 UTC | 20 Jan 25 14:24 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-968792                                 | cert-options-968792      | jenkins | v1.35.0 | 20 Jan 25 14:24 UTC | 20 Jan 25 14:24 UTC |
	| start   | -p old-k8s-version-140749                              | old-k8s-version-140749   | jenkins | v1.35.0 | 20 Jan 25 14:24 UTC | 20 Jan 25 14:26 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-140749        | old-k8s-version-140749   | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-140749                              | old-k8s-version-140749   | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:26 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| start   | -p cert-expiration-857413                              | cert-expiration-857413   | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:27 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-140749             | old-k8s-version-140749   | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-140749                              | old-k8s-version-140749   | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-857413                              | cert-expiration-857413   | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	| start   | -p no-preload-193023                                   | no-preload-193023        | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:28 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-193023             | no-preload-193023        | jenkins | v1.35.0 | 20 Jan 25 14:28 UTC | 20 Jan 25 14:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-193023                                   | no-preload-193023        | jenkins | v1.35.0 | 20 Jan 25 14:28 UTC | 20 Jan 25 14:28 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-193023                  | no-preload-193023        | jenkins | v1.35.0 | 20 Jan 25 14:28 UTC | 20 Jan 25 14:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-193023                                   | no-preload-193023        | jenkins | v1.35.0 | 20 Jan 25 14:28 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
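	The failing SecondStart is the 20 Jan 25 14:26 UTC `start -p old-k8s-version-140749` entry with no End Time. Given a profile that has already been through the start/stop sequence recorded above, the failure should be reproducible with the same invocation:
	
	  out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0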
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 14:28:41
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 14:28:41.934452  959078 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:28:41.934649  959078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:28:41.934681  959078 out.go:358] Setting ErrFile to fd 2...
	I0120 14:28:41.934703  959078 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:28:41.934986  959078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 14:28:41.935422  959078 out.go:352] Setting JSON to false
	I0120 14:28:41.936549  959078 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15067,"bootTime":1737368255,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 14:28:41.936659  959078 start.go:139] virtualization:  
	I0120 14:28:41.941796  959078 out.go:177] * [no-preload-193023] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 14:28:41.947838  959078 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:28:41.947902  959078 notify.go:220] Checking for updates...
	I0120 14:28:41.954426  959078 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:28:41.958248  959078 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 14:28:41.961722  959078 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	I0120 14:28:41.965621  959078 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 14:28:41.968528  959078 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:28:41.971959  959078 config.go:182] Loaded profile config "no-preload-193023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:28:41.972547  959078 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:28:41.995838  959078 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 14:28:41.995969  959078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 14:28:42.058529  959078 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 14:28:42.048122249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 14:28:42.058656  959078 docker.go:318] overlay module found
	I0120 14:28:42.061770  959078 out.go:177] * Using the docker driver based on existing profile
	I0120 14:28:42.064660  959078 start.go:297] selected driver: docker
	I0120 14:28:42.064724  959078 start.go:901] validating driver "docker" against &{Name:no-preload-193023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:28:42.064847  959078 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:28:42.065979  959078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 14:28:42.125791  959078 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 14:28:42.11450711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 14:28:42.126355  959078 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:28:42.126411  959078 cni.go:84] Creating CNI manager for ""
	I0120 14:28:42.126461  959078 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 14:28:42.126512  959078 start.go:340] cluster config:
	{Name:no-preload-193023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:28:42.134189  959078 out.go:177] * Starting "no-preload-193023" primary control-plane node in "no-preload-193023" cluster
	I0120 14:28:42.137286  959078 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 14:28:42.142408  959078 out.go:177] * Pulling base image v0.0.46 ...
	I0120 14:28:42.145528  959078 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 14:28:42.145661  959078 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 14:28:42.145778  959078 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/config.json ...
	I0120 14:28:42.146265  959078 cache.go:107] acquiring lock: {Name:mk048b29a53f4d008c3052c3c6bc803c91b93e06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:28:42.146395  959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0120 14:28:42.146426  959078 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 155.479µs
	I0120 14:28:42.146446  959078 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0120 14:28:42.146462  959078 cache.go:107] acquiring lock: {Name:mkb7aaee8835795c6c014c1ce05248e5184973f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:28:42.146480  959078 cache.go:107] acquiring lock: {Name:mkd49f8a3a7d8b62eaae6b30d36a72bc3f37b9c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:28:42.146504  959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0120 14:28:42.146511  959078 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 51.249µs
	I0120 14:28:42.146517  959078 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0120 14:28:42.146528  959078 cache.go:107] acquiring lock: {Name:mkb21e7156b8a8154a7bb49366e1b58ab4b63c90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:28:42.146553  959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0 exists
	I0120 14:28:42.146557  959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 exists
	I0120 14:28:42.146563  959078 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0" took 36.792µs
	I0120 14:28:42.146562  959078 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0" took 91.044µs
	I0120 14:28:42.146571  959078 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.0 succeeded
	I0120 14:28:42.146578  959078 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0120 14:28:42.146584  959078 cache.go:107] acquiring lock: {Name:mk15007d771510bcbb3138dab20c2214e874bda2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:28:42.146588  959078 cache.go:107] acquiring lock: {Name:mk6b4a9537d68dccdb743907a9c87d1a89dd16d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:28:42.146620  959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0 exists
	I0120 14:28:42.146624  959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0120 14:28:42.146627  959078 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0" took 44.587µs
	I0120 14:28:42.146631  959078 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 44.299µs
	I0120 14:28:42.146639  959078 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0120 14:28:42.146633  959078 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.0 succeeded
	I0120 14:28:42.146651  959078 cache.go:107] acquiring lock: {Name:mk04156d8a3480876042b13078b6a9d379533b16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:28:42.146657  959078 cache.go:107] acquiring lock: {Name:mkb58b2b584ebb6bcc71be907aa61ea8c3981782 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:28:42.146686  959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0 exists
	I0120 14:28:42.146691  959078 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0" took 35.347µs
	I0120 14:28:42.146697  959078 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.0 succeeded
	I0120 14:28:42.146786  959078 cache.go:115] /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0 exists
	I0120 14:28:42.146799  959078 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.0" -> "/home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0" took 153.084µs
	I0120 14:28:42.146808  959078 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.0 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.0 succeeded
	I0120 14:28:42.146815  959078 cache.go:87] Successfully saved all images to host disk.
	I0120 14:28:42.171787  959078 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0120 14:28:42.171824  959078 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0120 14:28:42.171842  959078 cache.go:227] Successfully downloaded all kic artifacts
	I0120 14:28:42.171884  959078 start.go:360] acquireMachinesLock for no-preload-193023: {Name:mk47940fca7af88b855cfd6901e9b3ed9ca36828 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:28:42.171955  959078 start.go:364] duration metric: took 48.295µs to acquireMachinesLock for "no-preload-193023"
	I0120 14:28:42.171985  959078 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:28:42.171998  959078 fix.go:54] fixHost starting: 
	I0120 14:28:42.172290  959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
	I0120 14:28:42.202038  959078 fix.go:112] recreateIfNeeded on no-preload-193023: state=Stopped err=<nil>
	W0120 14:28:42.202075  959078 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:28:42.205634  959078 out.go:177] * Restarting existing docker container for "no-preload-193023" ...
	I0120 14:28:42.449722  950903 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:42.449748  950903 pod_ready.go:82] duration metric: took 1.008350501s for pod "kube-scheduler-old-k8s-version-140749" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:42.449760  950903 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:44.455878  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:46.456063  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:42.210275  959078 cli_runner.go:164] Run: docker start no-preload-193023
	I0120 14:28:42.593875  959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
	I0120 14:28:42.616542  959078 kic.go:430] container "no-preload-193023" state is running.
	I0120 14:28:42.616960  959078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-193023
	I0120 14:28:42.643247  959078 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/config.json ...
	I0120 14:28:42.643494  959078 machine.go:93] provisionDockerMachine start ...
	I0120 14:28:42.643558  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:42.664656  959078 main.go:141] libmachine: Using SSH client type: native
	I0120 14:28:42.664932  959078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I0120 14:28:42.664943  959078 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:28:42.666926  959078 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0120 14:28:45.797211  959078 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-193023
	
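	The provisioner dials the container's forwarded SSH port (127.0.0.1:33839 here) and retries until sshd inside the restarted container answers; the earlier "ssh: handshake failed: EOF" is expected while the container is still booting. A minimal sketch of the same poll-then-run-hostname check using golang.org/x/crypto/ssh (port, user "docker", and key path are taken from the log above; the 2s retry interval is an assumption):

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        key, err := os.ReadFile("/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa")
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only; no host-key pinning
	            Timeout:         5 * time.Second,
	        }
	        for {
	            client, err := ssh.Dial("tcp", "127.0.0.1:33839", cfg)
	            if err != nil {
	                time.Sleep(2 * time.Second) // e.g. "handshake failed: EOF" while sshd starts
	                continue
	            }
	            session, err := client.NewSession()
	            if err != nil {
	                panic(err)
	            }
	            out, err := session.Output("hostname")
	            session.Close()
	            client.Close()
	            if err != nil {
	                panic(err)
	            }
	            fmt.Printf("%s", out) // expect "no-preload-193023"
	            return
	        }
	    }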
	I0120 14:28:45.797247  959078 ubuntu.go:169] provisioning hostname "no-preload-193023"
	I0120 14:28:45.797313  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:45.817153  959078 main.go:141] libmachine: Using SSH client type: native
	I0120 14:28:45.817402  959078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I0120 14:28:45.817418  959078 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-193023 && echo "no-preload-193023" | sudo tee /etc/hostname
	I0120 14:28:45.968830  959078 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-193023
	
	I0120 14:28:45.968949  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:45.989557  959078 main.go:141] libmachine: Using SSH client type: native
	I0120 14:28:45.989845  959078 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I0120 14:28:45.989871  959078 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-193023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-193023/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-193023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:28:46.118033  959078 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:28:46.118062  959078 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20242-741865/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-741865/.minikube}
	I0120 14:28:46.118083  959078 ubuntu.go:177] setting up certificates
	I0120 14:28:46.118093  959078 provision.go:84] configureAuth start
	I0120 14:28:46.118156  959078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-193023
	I0120 14:28:46.135664  959078 provision.go:143] copyHostCerts
	I0120 14:28:46.135731  959078 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem, removing ...
	I0120 14:28:46.135740  959078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem
	I0120 14:28:46.135817  959078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/ca.pem (1078 bytes)
	I0120 14:28:46.135919  959078 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem, removing ...
	I0120 14:28:46.135924  959078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem
	I0120 14:28:46.135950  959078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/cert.pem (1123 bytes)
	I0120 14:28:46.136013  959078 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem, removing ...
	I0120 14:28:46.136017  959078 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem
	I0120 14:28:46.136040  959078 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-741865/.minikube/key.pem (1679 bytes)
	I0120 14:28:46.136096  959078 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem org=jenkins.no-preload-193023 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-193023]
	I0120 14:28:46.648310  959078 provision.go:177] copyRemoteCerts
	I0120 14:28:46.648393  959078 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:28:46.648446  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:46.668332  959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
	I0120 14:28:46.763001  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0120 14:28:46.790698  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 14:28:46.816629  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 14:28:46.846580  959078 provision.go:87] duration metric: took 728.473299ms to configureAuth
	I0120 14:28:46.846608  959078 ubuntu.go:193] setting minikube options for container-runtime
	I0120 14:28:46.846825  959078 config.go:182] Loaded profile config "no-preload-193023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:28:46.846846  959078 machine.go:96] duration metric: took 4.203338307s to provisionDockerMachine
	I0120 14:28:46.846857  959078 start.go:293] postStartSetup for "no-preload-193023" (driver="docker")
	I0120 14:28:46.846868  959078 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:28:46.846928  959078 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:28:46.846975  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:46.864260  959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
	I0120 14:28:46.965911  959078 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:28:46.969797  959078 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0120 14:28:46.969829  959078 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0120 14:28:46.969840  959078 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0120 14:28:46.969847  959078 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0120 14:28:46.969858  959078 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-741865/.minikube/addons for local assets ...
	I0120 14:28:46.969913  959078 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-741865/.minikube/files for local assets ...
	I0120 14:28:46.970002  959078 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem -> 7472562.pem in /etc/ssl/certs
	I0120 14:28:46.970118  959078 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:28:46.982497  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem --> /etc/ssl/certs/7472562.pem (1708 bytes)
	I0120 14:28:47.013212  959078 start.go:296] duration metric: took 166.323315ms for postStartSetup
	I0120 14:28:47.013309  959078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 14:28:47.013375  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:47.033830  959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
	I0120 14:28:47.119356  959078 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
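	The two df probes above record how full /var is after the restart. For reference, the same numbers can be read without shelling out; a sketch over golang.org/x/sys/unix (the /var path comes from the log; equating Bavail with df's "available" column is an assumption about df's accounting):

	    package main

	    import (
	        "fmt"

	        "golang.org/x/sys/unix"
	    )

	    func main() {
	        var st unix.Statfs_t
	        if err := unix.Statfs("/var", &st); err != nil {
	            panic(err)
	        }
	        bsize := uint64(st.Bsize)
	        totalGB := st.Blocks * bsize / (1 << 30)
	        availGB := st.Bavail * bsize / (1 << 30)
	        fmt.Printf("/var: %dG total, %dG available\n", totalGB, availGB)
	    }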
	I0120 14:28:47.124063  959078 fix.go:56] duration metric: took 4.952056491s for fixHost
	I0120 14:28:47.124090  959078 start.go:83] releasing machines lock for "no-preload-193023", held for 4.952121509s
	I0120 14:28:47.124164  959078 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-193023
	I0120 14:28:47.141660  959078 ssh_runner.go:195] Run: cat /version.json
	I0120 14:28:47.141675  959078 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:28:47.141719  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:47.141748  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:47.161506  959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
	I0120 14:28:47.163155  959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
	I0120 14:28:47.253186  959078 ssh_runner.go:195] Run: systemctl --version
	I0120 14:28:47.412717  959078 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0120 14:28:47.417121  959078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0120 14:28:47.437365  959078 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0120 14:28:47.437450  959078 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:28:47.446391  959078 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 14:28:47.446415  959078 start.go:495] detecting cgroup driver to use...
	I0120 14:28:47.446447  959078 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0120 14:28:47.446507  959078 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 14:28:47.463661  959078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 14:28:47.475659  959078 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:28:47.475758  959078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:28:47.489412  959078 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:28:47.502116  959078 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:28:47.587426  959078 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:28:47.680557  959078 docker.go:233] disabling docker service ...
	I0120 14:28:47.680629  959078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:28:47.695122  959078 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:28:47.707679  959078 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:28:47.810797  959078 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:28:47.894497  959078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:28:47.906280  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:28:47.922898  959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0120 14:28:47.934981  959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 14:28:47.944796  959078 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 14:28:47.944882  959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 14:28:47.960339  959078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 14:28:47.972162  959078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 14:28:47.984379  959078 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 14:28:47.995362  959078 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:28:48.005518  959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 14:28:48.018222  959078 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0120 14:28:48.030511  959078 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0120 14:28:48.043542  959078 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:28:48.053928  959078 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:28:48.064076  959078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:28:48.162576  959078 ssh_runner.go:195] Run: sudo systemctl restart containerd
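	Each `sed -i -r` above flips a single key in /etc/containerd/config.toml (SystemdCgroup=false for the cgroupfs driver, the pause sandbox image, the CNI conf_dir) before containerd is restarted. A minimal Go sketch of one such idempotent line rewrite, assuming the same file path; only the SystemdCgroup toggle is shown:

	    package main

	    import (
	        "os"
	        "regexp"
	    )

	    func main() {
	        const path = "/etc/containerd/config.toml"
	        data, err := os.ReadFile(path)
	        if err != nil {
	            panic(err)
	        }
	        // Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	        data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	        if err := os.WriteFile(path, data, 0o644); err != nil {
	            panic(err)
	        }
	    }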
	I0120 14:28:48.334577  959078 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 14:28:48.334664  959078 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 14:28:48.347835  959078 start.go:563] Will wait 60s for crictl version
	I0120 14:28:48.347938  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:28:48.352844  959078 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:28:48.395647  959078 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0120 14:28:48.395733  959078 ssh_runner.go:195] Run: containerd --version
	I0120 14:28:48.420985  959078 ssh_runner.go:195] Run: containerd --version
	I0120 14:28:48.456099  959078 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.24 ...
	I0120 14:28:48.459224  959078 cli_runner.go:164] Run: docker network inspect no-preload-193023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0120 14:28:48.475876  959078 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0120 14:28:48.479645  959078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
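	The bash one-liner above makes the host.minikube.internal entry idempotent: any stale line is filtered out before the gateway IP is re-appended, staged through /tmp and copied into place with sudo. The same filter-and-append in Go (the entry string is taken from the log; writing /etc/hosts directly instead of staging through a temp file is a simplification):

	    package main

	    import (
	        "os"
	        "strings"
	    )

	    func main() {
	        const entry = "192.168.76.1\thost.minikube.internal"
	        data, err := os.ReadFile("/etc/hosts")
	        if err != nil {
	            panic(err)
	        }
	        var keep []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            if !strings.HasSuffix(line, "\thost.minikube.internal") {
	                keep = append(keep, line) // drop any stale entry first
	            }
	        }
	        keep = append(keep, entry)
	        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
	            panic(err)
	        }
	    }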
	I0120 14:28:48.491349  959078 kubeadm.go:883] updating cluster {Name:no-preload-193023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:28:48.491488  959078 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 14:28:48.491544  959078 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:28:48.533247  959078 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 14:28:48.533280  959078 cache_images.go:84] Images are preloaded, skipping loading
	I0120 14:28:48.533291  959078 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 containerd true true} ...
	I0120 14:28:48.533390  959078 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-193023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:28:48.533458  959078 ssh_runner.go:195] Run: sudo crictl info
	I0120 14:28:48.578112  959078 cni.go:84] Creating CNI manager for ""
	I0120 14:28:48.578140  959078 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 14:28:48.578152  959078 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:28:48.578198  959078 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-193023 NodeName:no-preload-193023 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 14:28:48.578350  959078 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-193023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:28:48.578434  959078 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 14:28:48.589680  959078 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:28:48.589755  959078 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:28:48.598748  959078 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0120 14:28:48.617702  959078 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:28:48.637865  959078 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
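	The rendered config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the live copy to decide whether a reconfigure is needed. A quick sanity check that decodes each document and prints its kind; a sketch assuming gopkg.in/yaml.v3:

	    package main

	    import (
	        "fmt"
	        "io"
	        "os"

	        "gopkg.in/yaml.v3"
	    )

	    func main() {
	        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	        if err != nil {
	            panic(err)
	        }
	        defer f.Close()
	        dec := yaml.NewDecoder(f)
	        for {
	            var doc struct {
	                APIVersion string `yaml:"apiVersion"`
	                Kind       string `yaml:"kind"`
	            }
	            if err := dec.Decode(&doc); err == io.EOF {
	                break
	            } else if err != nil {
	                panic(err) // malformed document in the stream
	            }
	            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	        }
	    }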
	I0120 14:28:48.657128  959078 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0120 14:28:48.660887  959078 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:28:48.672098  959078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:28:48.767168  959078 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:28:48.782013  959078 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023 for IP: 192.168.76.2
	I0120 14:28:48.782033  959078 certs.go:194] generating shared ca certs ...
	I0120 14:28:48.782049  959078 certs.go:226] acquiring lock for ca certs: {Name:mka7a6ccd7d8b5f47789c70c8e6dc479acdcdb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:48.782194  959078 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.key
	I0120 14:28:48.782237  959078 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.key
	I0120 14:28:48.782244  959078 certs.go:256] generating profile certs ...
	I0120 14:28:48.782331  959078 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.key
	I0120 14:28:48.782397  959078 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/apiserver.key.0e8d29cc
	I0120 14:28:48.782436  959078 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/proxy-client.key
	I0120 14:28:48.782549  959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256.pem (1338 bytes)
	W0120 14:28:48.782578  959078 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256_empty.pem, impossibly tiny 0 bytes
	I0120 14:28:48.782586  959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca-key.pem (1679 bytes)
	I0120 14:28:48.782611  959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/ca.pem (1078 bytes)
	I0120 14:28:48.782633  959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:28:48.782654  959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/certs/key.pem (1679 bytes)
	I0120 14:28:48.782696  959078 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem (1708 bytes)
	I0120 14:28:48.783319  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:28:48.813102  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 14:28:48.840210  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:28:48.874022  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0120 14:28:48.910896  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0120 14:28:48.962748  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 14:28:49.026063  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:28:49.062008  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:28:49.089566  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/ssl/certs/7472562.pem --> /usr/share/ca-certificates/7472562.pem (1708 bytes)
	I0120 14:28:49.117075  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:28:49.150625  959078 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-741865/.minikube/certs/747256.pem --> /usr/share/ca-certificates/747256.pem (1338 bytes)
	I0120 14:28:49.176558  959078 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:28:49.195049  959078 ssh_runner.go:195] Run: openssl version
	I0120 14:28:49.202402  959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7472562.pem && ln -fs /usr/share/ca-certificates/7472562.pem /etc/ssl/certs/7472562.pem"
	I0120 14:28:49.212757  959078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7472562.pem
	I0120 14:28:49.216539  959078 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 13:48 /usr/share/ca-certificates/7472562.pem
	I0120 14:28:49.216609  959078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7472562.pem
	I0120 14:28:49.224654  959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7472562.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:28:49.234055  959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:28:49.243912  959078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:28:49.248090  959078 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:28:49.248191  959078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:28:49.255437  959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:28:49.264759  959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/747256.pem && ln -fs /usr/share/ca-certificates/747256.pem /etc/ssl/certs/747256.pem"
	I0120 14:28:49.274641  959078 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/747256.pem
	I0120 14:28:49.278567  959078 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 13:48 /usr/share/ca-certificates/747256.pem
	I0120 14:28:49.278637  959078 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/747256.pem
	I0120 14:28:49.285809  959078 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/747256.pem /etc/ssl/certs/51391683.0"
	I0120 14:28:49.295368  959078 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:28:49.299196  959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:28:49.306411  959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:28:49.313425  959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:28:49.320497  959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:28:49.327945  959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:28:49.335613  959078 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
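	Each `openssl x509 -checkend 86400` above asserts that the certificate will still be valid 24 hours from now, which is why restart can skip regeneration. The equivalent check in pure Go using only the standard library (the cert path is one of those probed above):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    func main() {
	        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            panic("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        // Mirrors `openssl x509 -checkend 86400`: fail if expiry falls within 24h.
	        if time.Until(cert.NotAfter) < 24*time.Hour {
	            fmt.Println("certificate expires within 24h:", cert.NotAfter)
	            os.Exit(1)
	        }
	        fmt.Println("certificate valid beyond the 24h window")
	    }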
	I0120 14:28:49.343175  959078 kubeadm.go:392] StartCluster: {Name:no-preload-193023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-193023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:28:49.343280  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 14:28:49.343361  959078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:28:49.387893  959078 cri.go:89] found id: "d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
	I0120 14:28:49.387924  959078 cri.go:89] found id: "5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
	I0120 14:28:49.387931  959078 cri.go:89] found id: "209a98ebe1a7a0dbea3c6eecf2c4710020cb40136d3fb46c485448c0bd63dd5c"
	I0120 14:28:49.387944  959078 cri.go:89] found id: "057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
	I0120 14:28:49.387948  959078 cri.go:89] found id: "3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
	I0120 14:28:49.387952  959078 cri.go:89] found id: "8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
	I0120 14:28:49.387955  959078 cri.go:89] found id: "949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
	I0120 14:28:49.387959  959078 cri.go:89] found id: "3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
	I0120 14:28:49.387962  959078 cri.go:89] found id: ""
	I0120 14:28:49.388023  959078 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0120 14:28:49.410583  959078 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T14:28:49Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0120 14:28:49.410720  959078 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:28:49.422690  959078 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:28:49.422712  959078 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:28:49.422766  959078 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:28:49.442635  959078 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:28:49.443231  959078 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-193023" does not appear in /home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 14:28:49.443499  959078 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-741865/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-193023" cluster setting kubeconfig missing "no-preload-193023" context setting]
	I0120 14:28:49.443982  959078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-741865/kubeconfig: {Name:mkcf7578b1c91d60616ac7150d8566b28a92e8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:49.445422  959078 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:28:49.466001  959078 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0120 14:28:49.466036  959078 kubeadm.go:597] duration metric: took 43.316809ms to restartPrimaryControlPlane
	I0120 14:28:49.466046  959078 kubeadm.go:394] duration metric: took 122.881048ms to StartCluster
	I0120 14:28:49.466061  959078 settings.go:142] acquiring lock: {Name:mkf7c5865cae55b4373a466e1a24783d8090ef1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:49.466127  959078 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 14:28:49.467087  959078 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-741865/kubeconfig: {Name:mkcf7578b1c91d60616ac7150d8566b28a92e8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:49.467342  959078 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 14:28:49.467645  959078 config.go:182] Loaded profile config "no-preload-193023": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:28:49.467688  959078 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:28:49.467754  959078 addons.go:69] Setting storage-provisioner=true in profile "no-preload-193023"
	I0120 14:28:49.467777  959078 addons.go:238] Setting addon storage-provisioner=true in "no-preload-193023"
	I0120 14:28:49.467776  959078 addons.go:69] Setting default-storageclass=true in profile "no-preload-193023"
	I0120 14:28:49.467787  959078 addons.go:69] Setting metrics-server=true in profile "no-preload-193023"
	I0120 14:28:49.467796  959078 addons.go:238] Setting addon metrics-server=true in "no-preload-193023"
	I0120 14:28:49.467798  959078 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-193023"
	W0120 14:28:49.467801  959078 addons.go:247] addon metrics-server should already be in state true
	I0120 14:28:49.467824  959078 host.go:66] Checking if "no-preload-193023" exists ...
	I0120 14:28:49.468130  959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
	I0120 14:28:49.468291  959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
	W0120 14:28:49.467783  959078 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:28:49.470699  959078 host.go:66] Checking if "no-preload-193023" exists ...
	I0120 14:28:49.472122  959078 addons.go:69] Setting dashboard=true in profile "no-preload-193023"
	I0120 14:28:49.472279  959078 addons.go:238] Setting addon dashboard=true in "no-preload-193023"
	W0120 14:28:49.472310  959078 addons.go:247] addon dashboard should already be in state true
	I0120 14:28:49.472365  959078 host.go:66] Checking if "no-preload-193023" exists ...
	I0120 14:28:49.473336  959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
	I0120 14:28:49.478349  959078 out.go:177] * Verifying Kubernetes components...
	I0120 14:28:49.478663  959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
	I0120 14:28:49.486605  959078 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:28:49.531345  959078 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:28:49.532665  959078 addons.go:238] Setting addon default-storageclass=true in "no-preload-193023"
	W0120 14:28:49.532720  959078 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:28:49.532748  959078 host.go:66] Checking if "no-preload-193023" exists ...
	I0120 14:28:49.533289  959078 cli_runner.go:164] Run: docker container inspect no-preload-193023 --format={{.State.Status}}
	I0120 14:28:49.534746  959078 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:28:49.534777  959078 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:28:49.534838  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
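
The cli_runner lines use docker container inspect with Go templates: one to read the container's lifecycle state, another to recover the host port that Docker mapped to the guest's SSH port 22. A sketch of both queries using the same template strings as the log lines (the wrapper function names are illustrative):

```go
package driver

import (
	"os/exec"
	"strings"
)

// containerState reads the container's lifecycle state
// ("running", "exited", ...) via docker inspect.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format={{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// sshHostPort recovers the host port Docker mapped to the guest's
// SSH port 22/tcp, using the same template as the log line above.
func sshHostPort(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		name).Output()
	return strings.TrimSpace(string(out)), err
}
```
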
	I0120 14:28:49.561080  959078 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:28:49.561083  959078 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:28:49.564066  959078 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:28:49.564093  959078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:28:49.564161  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:49.567270  959078 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:28:48.456951  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:50.982318  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:49.570157  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:28:49.570188  959078 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:28:49.570267  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:49.596846  959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
	I0120 14:28:49.628235  959078 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:28:49.628259  959078 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:28:49.628338  959078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-193023
	I0120 14:28:49.642259  959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
	I0120 14:28:49.667798  959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
	I0120 14:28:49.685737  959078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/no-preload-193023/id_rsa Username:docker}
	I0120 14:28:49.769073  959078 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:28:49.894409  959078 node_ready.go:35] waiting up to 6m0s for node "no-preload-193023" to be "Ready" ...
	I0120 14:28:49.903776  959078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:28:49.920169  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:28:49.920192  959078 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:28:49.958063  959078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:28:49.991758  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:28:49.991860  959078 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:28:50.058449  959078 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:28:50.058527  959078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:28:50.247810  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:28:50.247891  959078 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:28:50.252097  959078 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:28:50.252178  959078 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:28:50.396278  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:28:50.396352  959078 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:28:50.553054  959078 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:28:50.553152  959078 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:28:50.639266  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:28:50.639365  959078 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:28:50.658647  959078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:28:50.706576  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:28:50.706654  959078 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:28:50.801743  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:28:50.801821  959078 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:28:50.878748  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:28:50.878827  959078 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:28:50.937360  959078 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:28:50.937435  959078 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:28:51.040054  959078 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
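
Each addon is enabled with a single kubectl apply carrying one -f flag per staged manifest, run through sudo with the in-cluster kubeconfig, exactly as the logged command lines show. A sketch of that invocation pattern (applyManifests is an illustrative name; the binary path is the version-pinned kubectl from this run):

```go
package addons

import "os/exec"

// applyManifests runs one kubectl apply with a -f flag per manifest.
// kubectlPath stands in for the version-pinned binary, e.g.
// /var/lib/minikube/binaries/v1.32.0/kubectl in this run; sudo accepts
// the leading KUBECONFIG=... assignment as an environment variable.
func applyManifests(kubectlPath string, manifests []string) ([]byte, error) {
	argv := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectlPath, "apply"}
	for _, m := range manifests {
		argv = append(argv, "-f", m)
	}
	return exec.Command("sudo", argv...).CombinedOutput()
}
```
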
	I0120 14:28:53.457118  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:55.457832  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:55.266392  959078 node_ready.go:49] node "no-preload-193023" has status "Ready":"True"
	I0120 14:28:55.266419  959078 node_ready.go:38] duration metric: took 5.371925706s for node "no-preload-193023" to be "Ready" ...
	I0120 14:28:55.266431  959078 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0120 14:28:55.334254  959078 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-g577w" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.481179  959078 pod_ready.go:93] pod "coredns-668d6bf9bc-g577w" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:55.481263  959078 pod_ready.go:82] duration metric: took 146.914733ms for pod "coredns-668d6bf9bc-g577w" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.481290  959078 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-193023" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.531555  959078 pod_ready.go:93] pod "etcd-no-preload-193023" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:55.531633  959078 pod_ready.go:82] duration metric: took 50.304596ms for pod "etcd-no-preload-193023" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.531664  959078 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-193023" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.549776  959078 pod_ready.go:93] pod "kube-apiserver-no-preload-193023" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:55.549852  959078 pod_ready.go:82] duration metric: took 18.164852ms for pod "kube-apiserver-no-preload-193023" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.549879  959078 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-193023" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.571955  959078 pod_ready.go:93] pod "kube-controller-manager-no-preload-193023" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:55.572030  959078 pod_ready.go:82] duration metric: took 22.129003ms for pod "kube-controller-manager-no-preload-193023" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.572077  959078 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z8rcv" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.581026  959078 pod_ready.go:93] pod "kube-proxy-z8rcv" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:55.581103  959078 pod_ready.go:82] duration metric: took 8.999422ms for pod "kube-proxy-z8rcv" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.581130  959078 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-193023" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.887639  959078 pod_ready.go:93] pod "kube-scheduler-no-preload-193023" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:55.887663  959078 pod_ready.go:82] duration metric: took 306.512834ms for pod "kube-scheduler-no-preload-193023" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:55.887676  959078 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace to be "Ready" ...
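
The node_ready/pod_ready lines are produced by a poll-until-Ready loop: fetch the pod, inspect its PodReady condition, retry until the per-pod timeout expires. A minimal sketch with client-go (waitPodReady and the 2s interval are illustrative; minikube's actual loop is not reproduced in this report):

```go
package node

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod until its PodReady condition is True or
// the timeout expires (PollImmediate then returns a timeout error).
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```
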
	I0120 14:28:57.907239  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:58.940210  959078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.036387851s)
	I0120 14:28:58.940267  959078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.982183186s)
	I0120 14:28:58.940496  959078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.281772043s)
	I0120 14:28:58.940520  959078 addons.go:479] Verifying addon metrics-server=true in "no-preload-193023"
	I0120 14:28:59.015410  959078 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.975267561s)
	I0120 14:28:59.017672  959078 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-193023 addons enable metrics-server
	
	I0120 14:28:59.020744  959078 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0120 14:28:57.957293  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:00.457254  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:59.023663  959078 addons.go:514] duration metric: took 9.555963446s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0120 14:29:00.395416  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:02.957338  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:05.457281  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:02.903173  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:05.395728  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:07.956788  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:10.455727  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:07.896086  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:10.394655  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:12.455805  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:14.455919  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:12.894094  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:15.393412  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:16.956594  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:18.957055  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:20.957177  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:17.394225  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:19.893727  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:21.893836  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:22.977897  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:25.456100  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:24.399007  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:26.893504  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:27.956285  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:29.957097  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:28.893812  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:30.894410  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:31.958545  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:34.520016  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:33.393814  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:35.394329  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:36.958082  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:39.455827  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:41.456412  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:37.394592  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:39.894028  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:41.896110  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:43.465069  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:45.956385  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:44.396749  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:46.894839  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:48.456073  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:50.957169  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:48.895226  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:51.394693  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:53.456920  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:55.460163  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:53.395055  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:55.894809  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:57.956176  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:59.957071  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:29:57.895103  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:00.395066  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:01.967055  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:04.456138  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:02.395939  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:04.895164  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:06.956305  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:08.956902  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:11.455925  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:07.394018  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:09.894498  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:13.956200  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:15.956651  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:12.394245  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:14.394553  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:16.894092  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:17.956978  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:19.957565  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:18.894776  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:20.894930  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:22.456276  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:24.970006  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:22.895024  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:25.393928  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:27.456774  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:29.463141  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:27.893755  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:29.894301  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:31.894673  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:31.957178  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:34.455767  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:34.394388  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:36.394438  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:36.956611  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:39.456640  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:38.893795  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:40.895452  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:41.956918  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:43.973328  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:46.455494  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:43.393821  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:45.394862  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:48.455765  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:50.456716  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:47.395240  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:49.894996  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:52.956367  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:54.956544  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:52.394309  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:54.893802  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:56.893865  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:57.457408  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:59.955937  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:30:58.895334  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:01.394470  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:01.957235  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:03.958142  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:06.461136  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:03.894290  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:06.393508  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:08.956483  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:10.956661  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:08.396251  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:10.895316  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:13.456294  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:15.456562  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:13.393381  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:15.394188  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:17.955801  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:19.956567  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:17.894291  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:20.394280  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:21.956906  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:23.956980  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:26.458636  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:22.394363  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:24.895061  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:28.957544  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:31.456217  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:27.393843  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:29.394069  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:31.394854  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:33.956541  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:36.456146  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:33.395061  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:35.894478  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:38.456341  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:40.456681  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:38.394475  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:40.893832  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:42.955903  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:44.956479  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:42.894112  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:44.894511  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:47.456415  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:49.956064  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:47.394133  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:49.394619  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:51.395046  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:51.956572  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:54.456227  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:56.456785  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:53.894916  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:56.393831  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:58.956968  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:00.957085  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:31:58.894685  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:01.393485  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:02.957264  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:04.962625  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:03.393802  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:05.895359  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:07.455559  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:09.455774  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:11.456500  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:08.394166  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:10.894733  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:13.956820  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:16.025898  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:13.394534  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:15.893513  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:18.457623  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:20.957089  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:17.894547  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:20.393947  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:23.456405  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:25.955753  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:22.394706  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:24.894931  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:28.456663  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:30.463692  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:27.393756  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:29.394726  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:31.894215  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:32.956881  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:34.956937  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:34.394675  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:36.894091  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:36.960987  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:39.456248  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:41.456476  950903 pod_ready.go:103] pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:39.394473  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:41.394782  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:42.456347  950903 pod_ready.go:82] duration metric: took 4m0.0065748s for pod "metrics-server-9975d5f86-lfq2q" in "kube-system" namespace to be "Ready" ...
	E0120 14:32:42.456373  950903 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 14:32:42.456384  950903 pod_ready.go:39] duration metric: took 5m18.75110665s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
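
The WaitExtra failure above is the 4m0s cap expiring: once the deadline passes, the wait loop's context reports context.DeadlineExceeded, which surfaces in the log as "context deadline exceeded". A sketch of that mechanism (waitWithDeadline and the 2s tick are illustrative):

```go
package waiter

import (
	"context"
	"time"
)

// waitWithDeadline polls check until it succeeds or the 4m0s deadline
// passes, at which point ctx.Err() is context.DeadlineExceeded.
func waitWithDeadline(check func() bool) error {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err() // context deadline exceeded after 4m0s
		case <-tick.C:
			if check() {
				return nil
			}
		}
	}
}
```
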
	I0120 14:32:42.456400  950903 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:32:42.456430  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:32:42.456494  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:32:42.495561  950903 cri.go:89] found id: "7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
	I0120 14:32:42.495581  950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
	I0120 14:32:42.495586  950903 cri.go:89] found id: ""
	I0120 14:32:42.495593  950903 logs.go:282] 2 containers: [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63]
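
crictl ps -a --quiet --name=<pattern> prints one container ID per line for every container (running or exited) whose name matches, which is how the two kube-apiserver IDs above are collected. A sketch of that listing step (listContainers is an illustrative wrapper):

```go
package cri

import (
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (any state) whose
// name matches, one ID per line of crictl's --quiet output.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}
```
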
	I0120 14:32:42.495650  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.499420  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.502920  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:32:42.503009  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:32:42.542022  950903 cri.go:89] found id: "260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
	I0120 14:32:42.542087  950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
	I0120 14:32:42.542106  950903 cri.go:89] found id: ""
	I0120 14:32:42.542131  950903 logs.go:282] 2 containers: [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b]
	I0120 14:32:42.542221  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.546159  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.549559  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:32:42.549707  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:32:42.588844  950903 cri.go:89] found id: "df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
	I0120 14:32:42.588910  950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
	I0120 14:32:42.588931  950903 cri.go:89] found id: ""
	I0120 14:32:42.588965  950903 logs.go:282] 2 containers: [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d]
	I0120 14:32:42.589060  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.593064  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.596734  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:32:42.596827  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:32:42.637742  950903 cri.go:89] found id: "901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
	I0120 14:32:42.637766  950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
	I0120 14:32:42.637772  950903 cri.go:89] found id: ""
	I0120 14:32:42.637779  950903 logs.go:282] 2 containers: [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071]
	I0120 14:32:42.637837  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.641531  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.645214  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:32:42.645294  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:32:42.694848  950903 cri.go:89] found id: "980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
	I0120 14:32:42.694873  950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
	I0120 14:32:42.694878  950903 cri.go:89] found id: ""
	I0120 14:32:42.694885  950903 logs.go:282] 2 containers: [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25]
	I0120 14:32:42.694944  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.698884  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.702523  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:32:42.702604  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:32:42.744000  950903 cri.go:89] found id: "cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
	I0120 14:32:42.744031  950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
	I0120 14:32:42.744037  950903 cri.go:89] found id: ""
	I0120 14:32:42.744045  950903 logs.go:282] 2 containers: [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec]
	I0120 14:32:42.744145  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.748068  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.751593  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:32:42.751671  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:32:42.788738  950903 cri.go:89] found id: "15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
	I0120 14:32:42.788761  950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
	I0120 14:32:42.788766  950903 cri.go:89] found id: ""
	I0120 14:32:42.788773  950903 logs.go:282] 2 containers: [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f]
	I0120 14:32:42.788833  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.792694  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.796248  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:32:42.796327  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:32:42.835380  950903 cri.go:89] found id: "c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
	I0120 14:32:42.835402  950903 cri.go:89] found id: ""
	I0120 14:32:42.835411  950903 logs.go:282] 1 containers: [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6]
	I0120 14:32:42.835470  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.839424  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:32:42.839588  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:32:42.886867  950903 cri.go:89] found id: "0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
	I0120 14:32:42.886943  950903 cri.go:89] found id: "46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
	I0120 14:32:42.886963  950903 cri.go:89] found id: ""
	I0120 14:32:42.886990  950903 logs.go:282] 2 containers: [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa]
	I0120 14:32:42.887084  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.892761  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:42.897255  950903 logs.go:123] Gathering logs for dmesg ...
	I0120 14:32:42.897281  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:32:42.915606  950903 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:32:42.915635  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:32:43.086993  950903 logs.go:123] Gathering logs for etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] ...
	I0120 14:32:43.087027  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
	I0120 14:32:43.137045  950903 logs.go:123] Gathering logs for coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] ...
	I0120 14:32:43.137078  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
	I0120 14:32:43.177316  950903 logs.go:123] Gathering logs for kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] ...
	I0120 14:32:43.177346  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
	I0120 14:32:43.226521  950903 logs.go:123] Gathering logs for kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] ...
	I0120 14:32:43.226552  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
	I0120 14:32:43.277166  950903 logs.go:123] Gathering logs for containerd ...
	I0120 14:32:43.277198  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:32:43.350057  950903 logs.go:123] Gathering logs for kubelet ...
	I0120 14:32:43.350162  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 14:32:43.415129  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.690899     663 reflector.go:138] object-"kube-system"/"coredns-token-f95sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f95sh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.415423  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691117     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.415671  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691376     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.415917  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691453     663 reflector.go:138] object-"default"/"default-token-8wp7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8wp7x" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.416155  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691503     663 reflector.go:138] object-"kube-system"/"kindnet-token-xx7dh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xx7dh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.416381  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691562     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-s6tbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s6tbt" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.416607  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691635     663 reflector.go:138] object-"kube-system"/"metrics-server-token-dgscp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dgscp" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.416848  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.692028     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-mlrbf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-mlrbf" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.425962  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:27 old-k8s-version-140749 kubelet[663]: E0120 14:27:27.904251     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.426161  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:28 old-k8s-version-140749 kubelet[663]: E0120 14:27:28.466147     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.428994  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:41 old-k8s-version-140749 kubelet[663]: E0120 14:27:41.953783     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.430807  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:43 old-k8s-version-140749 kubelet[663]: E0120 14:27:43.761273     663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-xh79t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-xh79t" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:43.431350  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:55 old-k8s-version-140749 kubelet[663]: E0120 14:27:55.965627     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.432061  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:57 old-k8s-version-140749 kubelet[663]: E0120 14:27:57.605396     663 pod_workers.go:191] Error syncing pod e9c231b5-a5c1-498d-aa26-caf987208dc2 ("storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"
	W0120 14:32:43.432532  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:58 old-k8s-version-140749 kubelet[663]: E0120 14:27:58.615334     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.432875  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:59 old-k8s-version-140749 kubelet[663]: E0120 14:27:59.633074     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.433786  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:05 old-k8s-version-140749 kubelet[663]: E0120 14:28:05.586265     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.436345  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:06 old-k8s-version-140749 kubelet[663]: E0120 14:28:06.962070     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.436803  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:17 old-k8s-version-140749 kubelet[663]: E0120 14:28:17.944316     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.437380  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:18 old-k8s-version-140749 kubelet[663]: E0120 14:28:18.685866     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.437740  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:25 old-k8s-version-140749 kubelet[663]: E0120 14:28:25.586275     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.437928  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:32 old-k8s-version-140749 kubelet[663]: E0120 14:28:32.945271     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.438346  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:37 old-k8s-version-140749 kubelet[663]: E0120 14:28:37.943400     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.438539  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:46 old-k8s-version-140749 kubelet[663]: E0120 14:28:46.943760     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.439145  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:51 old-k8s-version-140749 kubelet[663]: E0120 14:28:51.768837     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.439485  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:55 old-k8s-version-140749 kubelet[663]: E0120 14:28:55.585724     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.442279  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:00 old-k8s-version-140749 kubelet[663]: E0120 14:29:00.952397     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.442626  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:10 old-k8s-version-140749 kubelet[663]: E0120 14:29:10.942909     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.442824  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:11 old-k8s-version-140749 kubelet[663]: E0120 14:29:11.944209     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.443346  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:22 old-k8s-version-140749 kubelet[663]: E0120 14:29:22.951255     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.443537  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:23 old-k8s-version-140749 kubelet[663]: E0120 14:29:23.956425     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.444140  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:35 old-k8s-version-140749 kubelet[663]: E0120 14:29:35.903419     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.444330  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:36 old-k8s-version-140749 kubelet[663]: E0120 14:29:36.945915     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.444679  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:45 old-k8s-version-140749 kubelet[663]: E0120 14:29:45.585844     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.444873  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:51 old-k8s-version-140749 kubelet[663]: E0120 14:29:51.943550     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.445206  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:59 old-k8s-version-140749 kubelet[663]: E0120 14:29:59.943021     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.445390  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:05 old-k8s-version-140749 kubelet[663]: E0120 14:30:05.943119     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.445746  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:13 old-k8s-version-140749 kubelet[663]: E0120 14:30:13.942813     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.445986  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:17 old-k8s-version-140749 kubelet[663]: E0120 14:30:17.943166     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.446323  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.943282     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.451516  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.959102     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:43.451888  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:40 old-k8s-version-140749 kubelet[663]: E0120 14:30:40.946333     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.452090  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:43 old-k8s-version-140749 kubelet[663]: E0120 14:30:43.946388     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.452419  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:52 old-k8s-version-140749 kubelet[663]: E0120 14:30:52.943384     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.452606  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:57 old-k8s-version-140749 kubelet[663]: E0120 14:30:57.943462     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.453215  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:04 old-k8s-version-140749 kubelet[663]: E0120 14:31:04.184881     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.453555  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:05 old-k8s-version-140749 kubelet[663]: E0120 14:31:05.586278     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.453747  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:11 old-k8s-version-140749 kubelet[663]: E0120 14:31:11.943489     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.454085  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:19 old-k8s-version-140749 kubelet[663]: E0120 14:31:19.942873     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.454273  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.454460  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.454796  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.454982  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.455332  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.455568  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.455909  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.456251  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.456438  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.456624  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.456954  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:43.457154  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:43.457520  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
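
The `Found kubelet problem` lines above are produced by minikube's log scanner (the `logs.go:138` call site), which tails the kubelet journal and flags any line matching a list of known problem patterns. A minimal sketch of that idea in Go, assuming an illustrative pattern list — the regexes below are stand-ins chosen to match this log, not minikube's actual set:

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// Illustrative problem patterns; minikube keeps its real list in its
// logs package, which is not reproduced here.
var problemPatterns = []*regexp.Regexp{
	regexp.MustCompile(`ImagePullBackOff`),
	regexp.MustCompile(`CrashLoopBackOff`),
	regexp.MustCompile(`Failed to watch \*v1\.(Secret|ConfigMap)`),
}

// scanKubeletLog returns the journal lines that match a known problem.
func scanKubeletLog(journal string) []string {
	var hits []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, re := range problemPatterns {
			if re.MatchString(line) {
				hits = append(hits, line)
				break
			}
		}
	}
	return hits
}

func main() {
	journal := "Jan 20 14:28:17 node kubelet[663]: E0120 ... ImagePullBackOff\n" +
		"Jan 20 14:28:18 node kubelet[663]: I0120 ... all healthy"
	for _, hit := range scanKubeletLog(journal) {
		fmt.Println("Found kubelet problem:", hit)
	}
}
```
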
	I0120 14:32:43.457532  950903 logs.go:123] Gathering logs for kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] ...
	I0120 14:32:43.457547  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
	I0120 14:32:43.513402  950903 logs.go:123] Gathering logs for kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] ...
	I0120 14:32:43.513432  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
	I0120 14:32:43.575002  950903 logs.go:123] Gathering logs for kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] ...
	I0120 14:32:43.575049  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
	I0120 14:32:43.635251  950903 logs.go:123] Gathering logs for storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] ...
	I0120 14:32:43.635291  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
	I0120 14:32:43.679772  950903 logs.go:123] Gathering logs for kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] ...
	I0120 14:32:43.679802  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
	I0120 14:32:43.725126  950903 logs.go:123] Gathering logs for kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] ...
	I0120 14:32:43.725160  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
	I0120 14:32:43.764221  950903 logs.go:123] Gathering logs for storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] ...
	I0120 14:32:43.764246  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
	I0120 14:32:43.803933  950903 logs.go:123] Gathering logs for kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] ...
	I0120 14:32:43.803963  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
	I0120 14:32:43.865136  950903 logs.go:123] Gathering logs for kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] ...
	I0120 14:32:43.865173  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
	I0120 14:32:43.927846  950903 logs.go:123] Gathering logs for etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] ...
	I0120 14:32:43.927885  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
	I0120 14:32:43.976062  950903 logs.go:123] Gathering logs for coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] ...
	I0120 14:32:43.976150  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
	I0120 14:32:44.017480  950903 logs.go:123] Gathering logs for kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] ...
	I0120 14:32:44.017512  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
	I0120 14:32:44.074744  950903 logs.go:123] Gathering logs for kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] ...
	I0120 14:32:44.074778  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
	I0120 14:32:44.129782  950903 logs.go:123] Gathering logs for container status ...
	I0120 14:32:44.129812  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
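
The container-status command above is built with a shell fallback: "which crictl || echo crictl" resolves crictl's path if one is installed (falling back to the bare name), and if the whole crictl invocation fails, "sudo docker ps -a" runs instead — so the same gathering step works on both containerd and docker runtimes. A rough Go equivalent of that fallback, with the sudo/SSH plumbing from `ssh_runner` omitted:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, mirroring
// the shell fallback in the log above. Error handling is simplified.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker is usable:", err)
		return
	}
	fmt.Print(out)
}
```
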
	I0120 14:32:44.177518  950903 out.go:358] Setting ErrFile to fd 2...
	I0120 14:32:44.177547  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 14:32:44.177739  950903 out.go:270] X Problems detected in kubelet:
	W0120 14:32:44.177760  950903 out.go:270]   Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:44.177785  950903 out.go:270]   Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:44.177798  950903 out.go:270]   Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:44.177805  950903 out.go:270]   Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:44.177811  950903 out.go:270]   Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	I0120 14:32:44.177818  950903 out.go:358] Setting ErrFile to fd 2...
	I0120 14:32:44.177825  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
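
Every metrics-server failure in the summary above has the same root cause: the pod's image is pinned to `fake.domain/registry.k8s.io/echoserver:1.4`, and `fake.domain` resolves to nothing, so each pull attempt dies at DNS (`lookup fake.domain on 192.168.85.1:53: no such host`) and the pod cycles between ErrImagePull and ImagePullBackOff. Judging by how consistently the image is referenced throughout the run, the unresolvable host looks like deliberate test input rather than an environment fault. The failure mode reproduces trivially:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// "fake.domain" has no DNS records, so an image pull from it fails
	// before any HTTP request to the registry is ever made.
	_, err := net.LookupHost("fake.domain")
	fmt.Println(err) // e.g. "lookup fake.domain: no such host"
}
```
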
	I0120 14:32:43.395313  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:45.894471  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:48.397393  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:50.893995  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
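
The `pod_ready` lines above come from a different process (PID 959078, a second profile under test in parallel, its output interleaved here) and show a poll loop waiting for a pod's `Ready` condition to become `True`. A minimal client-go sketch of the same check; the kubeconfig path is a placeholder, and this is the generic readiness test, not minikube's own `pod_ready` implementation:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True — the
// condition the log lines above are polling on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; substitute a real one.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 60; i++ {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-f79f97bbb-675vb", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}
```
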
	I0120 14:32:54.183032  950903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:32:54.196296  950903 api_server.go:72] duration metric: took 5m48.957681866s to wait for apiserver process to appear ...
	I0120 14:32:54.196319  950903 api_server.go:88] waiting for apiserver healthz status ...
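
Having confirmed via `pgrep` that a kube-apiserver process exists, the wait moves on to probing the apiserver's `/healthz` endpoint over HTTPS. A bare-bones version of such a probe — the address below is a placeholder, and certificate verification is skipped in this sketch only because minikube's apiserver certificate is self-signed:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Self-signed apiserver cert, so skip verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Placeholder host:port for the cluster's apiserver.
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status) // "200 OK" once healthy
}
```
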
	I0120 14:32:54.196358  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:32:54.196418  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:32:54.237364  950903 cri.go:89] found id: "7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
	I0120 14:32:54.237383  950903 cri.go:89] found id: "032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
	I0120 14:32:54.237388  950903 cri.go:89] found id: ""
	I0120 14:32:54.237395  950903 logs.go:282] 2 containers: [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63]
	I0120 14:32:54.237452  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.241365  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.244944  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:32:54.245021  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:32:54.290562  950903 cri.go:89] found id: "260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
	I0120 14:32:54.290585  950903 cri.go:89] found id: "4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
	I0120 14:32:54.290590  950903 cri.go:89] found id: ""
	I0120 14:32:54.290598  950903 logs.go:282] 2 containers: [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b]
	I0120 14:32:54.290659  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.294510  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.298115  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:32:54.298194  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:32:54.343372  950903 cri.go:89] found id: "df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
	I0120 14:32:54.343391  950903 cri.go:89] found id: "49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
	I0120 14:32:54.343396  950903 cri.go:89] found id: ""
	I0120 14:32:54.343403  950903 logs.go:282] 2 containers: [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d]
	I0120 14:32:54.343464  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.349876  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.353487  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:32:54.353670  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:32:54.404374  950903 cri.go:89] found id: "901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
	I0120 14:32:54.404402  950903 cri.go:89] found id: "a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
	I0120 14:32:54.404407  950903 cri.go:89] found id: ""
	I0120 14:32:54.404415  950903 logs.go:282] 2 containers: [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071]
	I0120 14:32:54.404476  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.408537  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.412682  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:32:54.412783  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:32:54.460122  950903 cri.go:89] found id: "980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
	I0120 14:32:54.460145  950903 cri.go:89] found id: "4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
	I0120 14:32:54.460150  950903 cri.go:89] found id: ""
	I0120 14:32:54.460158  950903 logs.go:282] 2 containers: [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25]
	I0120 14:32:54.460215  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.464203  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.468701  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:32:54.468781  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:32:54.517365  950903 cri.go:89] found id: "cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
	I0120 14:32:54.517389  950903 cri.go:89] found id: "f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
	I0120 14:32:54.517394  950903 cri.go:89] found id: ""
	I0120 14:32:54.517401  950903 logs.go:282] 2 containers: [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec]
	I0120 14:32:54.517461  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.521673  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.525274  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:32:54.525351  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:32:54.571915  950903 cri.go:89] found id: "15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
	I0120 14:32:54.571943  950903 cri.go:89] found id: "4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
	I0120 14:32:54.571950  950903 cri.go:89] found id: ""
	I0120 14:32:54.571957  950903 logs.go:282] 2 containers: [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f]
	I0120 14:32:54.572019  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.576070  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.579794  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:32:54.579879  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:32:54.618519  950903 cri.go:89] found id: "0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
	I0120 14:32:54.618588  950903 cri.go:89] found id: "46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
	I0120 14:32:54.618600  950903 cri.go:89] found id: ""
	I0120 14:32:54.618609  950903 logs.go:282] 2 containers: [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa]
	I0120 14:32:54.618677  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.622286  950903 ssh_runner.go:195] Run: which crictl
	I0120 14:32:54.625962  950903 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:32:54.626082  950903 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:32:54.665109  950903 cri.go:89] found id: "c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
	I0120 14:32:54.665134  950903 cri.go:89] found id: ""
	I0120 14:32:54.665143  950903 logs.go:282] 1 containers: [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6]
	I0120 14:32:54.665201  950903 ssh_runner.go:195] Run: which crictl
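
The enumeration above repeats one pattern per control-plane component: `sudo crictl ps -a --quiet --name=<component>` prints bare container IDs, one per line, and because `-a` includes exited containers, a component that was restarted yields two IDs — the live container plus its stopped predecessor (hence "2 containers" for nearly everything, but only one for kubernetes-dashboard, which presumably never restarted). Each ID is then handed to `crictl logs --tail 400`. In sketch form:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running and exited) whose name
// matches the filter, mirroring the crictl invocations in the log.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		panic(err)
	}
	for _, id := range ids {
		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}
```
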
	I0120 14:32:54.668912  950903 logs.go:123] Gathering logs for kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] ...
	I0120 14:32:54.668936  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45"
	I0120 14:32:54.731588  950903 logs.go:123] Gathering logs for kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] ...
	I0120 14:32:54.731623  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec"
	I0120 14:32:54.798223  950903 logs.go:123] Gathering logs for kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] ...
	I0120 14:32:54.798262  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed"
	I0120 14:32:54.849667  950903 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:32:54.849699  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:32:55.017611  950903 logs.go:123] Gathering logs for kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] ...
	I0120 14:32:55.017703  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c"
	I0120 14:32:55.079897  950903 logs.go:123] Gathering logs for kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] ...
	I0120 14:32:55.079935  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071"
	I0120 14:32:55.127145  950903 logs.go:123] Gathering logs for kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] ...
	I0120 14:32:55.127184  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d"
	I0120 14:32:55.179168  950903 logs.go:123] Gathering logs for kubelet ...
	I0120 14:32:55.179197  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 14:32:55.231529  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.690899     663 reflector.go:138] object-"kube-system"/"coredns-token-f95sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-f95sh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.231791  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691117     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232001  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691376     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232213  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691453     663 reflector.go:138] object-"default"/"default-token-8wp7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8wp7x" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232424  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691503     663 reflector.go:138] object-"kube-system"/"kindnet-token-xx7dh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xx7dh" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232643  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691562     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-s6tbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s6tbt" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.232867  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.691635     663 reflector.go:138] object-"kube-system"/"metrics-server-token-dgscp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dgscp" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.233121  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:23 old-k8s-version-140749 kubelet[663]: E0120 14:27:23.692028     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-mlrbf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-mlrbf" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.242036  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:27 old-k8s-version-140749 kubelet[663]: E0120 14:27:27.904251     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.242233  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:28 old-k8s-version-140749 kubelet[663]: E0120 14:27:28.466147     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.245063  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:41 old-k8s-version-140749 kubelet[663]: E0120 14:27:41.953783     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.246929  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:43 old-k8s-version-140749 kubelet[663]: E0120 14:27:43.761273     663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-xh79t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-xh79t" is forbidden: User "system:node:old-k8s-version-140749" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-140749' and this object
	W0120 14:32:55.247464  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:55 old-k8s-version-140749 kubelet[663]: E0120 14:27:55.965627     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.248064  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:57 old-k8s-version-140749 kubelet[663]: E0120 14:27:57.605396     663 pod_workers.go:191] Error syncing pod e9c231b5-a5c1-498d-aa26-caf987208dc2 ("storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e9c231b5-a5c1-498d-aa26-caf987208dc2)"
	W0120 14:32:55.248529  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:58 old-k8s-version-140749 kubelet[663]: E0120 14:27:58.615334     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.248857  950903 logs.go:138] Found kubelet problem: Jan 20 14:27:59 old-k8s-version-140749 kubelet[663]: E0120 14:27:59.633074     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.249550  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:05 old-k8s-version-140749 kubelet[663]: E0120 14:28:05.586265     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.252091  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:06 old-k8s-version-140749 kubelet[663]: E0120 14:28:06.962070     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.252545  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:17 old-k8s-version-140749 kubelet[663]: E0120 14:28:17.944316     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.253009  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:18 old-k8s-version-140749 kubelet[663]: E0120 14:28:18.685866     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.253399  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:25 old-k8s-version-140749 kubelet[663]: E0120 14:28:25.586275     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.253597  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:32 old-k8s-version-140749 kubelet[663]: E0120 14:28:32.945271     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.253925  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:37 old-k8s-version-140749 kubelet[663]: E0120 14:28:37.943400     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.254111  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:46 old-k8s-version-140749 kubelet[663]: E0120 14:28:46.943760     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.254695  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:51 old-k8s-version-140749 kubelet[663]: E0120 14:28:51.768837     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.255022  950903 logs.go:138] Found kubelet problem: Jan 20 14:28:55 old-k8s-version-140749 kubelet[663]: E0120 14:28:55.585724     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.258008  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:00 old-k8s-version-140749 kubelet[663]: E0120 14:29:00.952397     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.258363  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:10 old-k8s-version-140749 kubelet[663]: E0120 14:29:10.942909     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.258551  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:11 old-k8s-version-140749 kubelet[663]: E0120 14:29:11.944209     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.258884  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:22 old-k8s-version-140749 kubelet[663]: E0120 14:29:22.951255     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.259078  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:23 old-k8s-version-140749 kubelet[663]: E0120 14:29:23.956425     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.259667  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:35 old-k8s-version-140749 kubelet[663]: E0120 14:29:35.903419     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.259852  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:36 old-k8s-version-140749 kubelet[663]: E0120 14:29:36.945915     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.260180  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:45 old-k8s-version-140749 kubelet[663]: E0120 14:29:45.585844     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.260364  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:51 old-k8s-version-140749 kubelet[663]: E0120 14:29:51.943550     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.260690  950903 logs.go:138] Found kubelet problem: Jan 20 14:29:59 old-k8s-version-140749 kubelet[663]: E0120 14:29:59.943021     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.260876  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:05 old-k8s-version-140749 kubelet[663]: E0120 14:30:05.943119     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.261204  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:13 old-k8s-version-140749 kubelet[663]: E0120 14:30:13.942813     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.261391  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:17 old-k8s-version-140749 kubelet[663]: E0120 14:30:17.943166     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.261725  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.943282     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.264343  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:28 old-k8s-version-140749 kubelet[663]: E0120 14:30:28.959102     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 14:32:55.264682  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:40 old-k8s-version-140749 kubelet[663]: E0120 14:30:40.946333     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.264869  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:43 old-k8s-version-140749 kubelet[663]: E0120 14:30:43.946388     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.265195  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:52 old-k8s-version-140749 kubelet[663]: E0120 14:30:52.943384     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.265378  950903 logs.go:138] Found kubelet problem: Jan 20 14:30:57 old-k8s-version-140749 kubelet[663]: E0120 14:30:57.943462     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.265970  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:04 old-k8s-version-140749 kubelet[663]: E0120 14:31:04.184881     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.266300  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:05 old-k8s-version-140749 kubelet[663]: E0120 14:31:05.586278     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.266484  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:11 old-k8s-version-140749 kubelet[663]: E0120 14:31:11.943489     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.266811  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:19 old-k8s-version-140749 kubelet[663]: E0120 14:31:19.942873     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.266995  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.267180  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.267508  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.267693  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.268018  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.268202  950903 logs.go:138] Found kubelet problem: Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.268526  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.268851  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.269034  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.269217  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.269551  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.269743  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.270064  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.270393  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.270576  950903 logs.go:138] Found kubelet problem: Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
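The warning run above captures the two expected failure modes for this profile: metrics-server can never resolve the intentionally unresolvable fake.domain registry the test configures, so each pull attempt ends in ErrImagePull and then ImagePullBackOff, while dashboard-metrics-scraper crash-loops with the usual doubling backoff (10s, 20s, 40s, 1m20s, 2m40s). A minimal sketch for confirming the pull failure by hand, assuming shell access to the node (illustrative commands, not part of the test):

	minikube -p old-k8s-version-140749 ssh
	# The resolver error from the kubelet log should reproduce directly:
	sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	# expected: "dial tcp: lookup fake.domain ... no such host", as above
	# The scraper's exited attempts are visible in the runtime's container list:
	sudo crictl ps -a | grep dashboard-metrics-scraper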
	I0120 14:32:55.270586  950903 logs.go:123] Gathering logs for etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] ...
	I0120 14:32:55.270600  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b"
	I0120 14:32:55.318446  950903 logs.go:123] Gathering logs for coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] ...
	I0120 14:32:55.318482  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b"
	I0120 14:32:55.374342  950903 logs.go:123] Gathering logs for dmesg ...
	I0120 14:32:55.374372  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:32:55.397751  950903 logs.go:123] Gathering logs for etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] ...
	I0120 14:32:55.397781  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b"
	I0120 14:32:55.441396  950903 logs.go:123] Gathering logs for kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] ...
	I0120 14:32:55.441427  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25"
	I0120 14:32:55.485012  950903 logs.go:123] Gathering logs for kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] ...
	I0120 14:32:55.485049  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f"
	I0120 14:32:55.538388  950903 logs.go:123] Gathering logs for storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] ...
	I0120 14:32:55.538415  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233"
	I0120 14:32:55.603551  950903 logs.go:123] Gathering logs for storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] ...
	I0120 14:32:55.603583  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa"
	I0120 14:32:55.653716  950903 logs.go:123] Gathering logs for kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] ...
	I0120 14:32:55.653743  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6"
	I0120 14:32:55.705317  950903 logs.go:123] Gathering logs for kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] ...
	I0120 14:32:55.705344  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63"
	I0120 14:32:55.761106  950903 logs.go:123] Gathering logs for coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] ...
	I0120 14:32:55.761142  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d"
	I0120 14:32:55.800636  950903 logs.go:123] Gathering logs for kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] ...
	I0120 14:32:55.800666  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f"
	I0120 14:32:55.845669  950903 logs.go:123] Gathering logs for containerd ...
	I0120 14:32:55.845701  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:32:55.917760  950903 logs.go:123] Gathering logs for container status ...
	I0120 14:32:55.917799  950903 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
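The container-status line above relies on a defensive shell fallback: the backtick substitution uses the full crictl path when `which` finds one, keeps the bare name otherwise, and the trailing `|| sudo docker ps -a` covers hosts without a CRI client at all. The same idiom in modern $(...) form, as a standalone sketch:

	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	# which crictl -> absolute path when installed; otherwise `echo crictl`
	#                 keeps the literal name so the command line stays valid
	# outer ||     -> if the crictl invocation still fails, fall through to
	#                 the Docker CLI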
	I0120 14:32:55.994852  950903 out.go:358] Setting ErrFile to fd 2...
	I0120 14:32:55.994879  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 14:32:55.994927  950903 out.go:270] X Problems detected in kubelet:
	W0120 14:32:55.994945  950903 out.go:270]   Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.994954  950903 out.go:270]   Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 14:32:55.994966  950903 out.go:270]   Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.994973  950903 out.go:270]   Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	W0120 14:32:55.994985  950903 out.go:270]   Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0120 14:32:55.994992  950903 out.go:358] Setting ErrFile to fd 2...
	I0120 14:32:55.995007  950903 out.go:392] TERM=,COLORTERM=, which probably does not support color
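At this point the first runner (pid 950903) stops collecting and re-prints the most recent kubelet problems as its summary. The same view can be pulled straight from the node's journal; a hypothetical equivalent filter:

	sudo journalctl -u kubelet --no-pager -n 400 \
	  | grep -E 'ImagePullBackOff|CrashLoopBackOff|ErrImagePull'
	# -u kubelet  limit to the kubelet unit; -n 400  last 400 entries;
	# the grep keeps the same classes of pod sync errors the summary surfaces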
	I0120 14:32:53.394116  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:55.394996  959078 pod_ready.go:103] pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace has status "Ready":"False"
	I0120 14:32:55.897007  959078 pod_ready.go:82] duration metric: took 4m0.009316185s for pod "metrics-server-f79f97bbb-675vb" in "kube-system" namespace to be "Ready" ...
	E0120 14:32:55.897032  959078 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 14:32:55.897043  959078 pod_ready.go:39] duration metric: took 4m0.630600399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
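The interleaved lines from the second runner (pid 959078, apparently a concurrent profile in the same job) show a deadline actually being hit: the extra readiness pass gives its metrics-server pod 4m0s to turn Ready, and when it never does the wait aborts with context deadline exceeded and moves on to process checks. Roughly the same wait expressed with kubectl, as an illustrative sketch (label selector assumed, not taken from the log):

	# run against the affected profile's kube context
	kubectl -n kube-system wait pod -l k8s-app=metrics-server \
	  --for=condition=Ready --timeout=4m
	# exits non-zero after 4m if the pod stays NotReady, mirroring the
	# "context deadline exceeded" above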
	I0120 14:32:55.897057  959078 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:32:55.897084  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:32:55.897143  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:32:55.950262  959078 cri.go:89] found id: "6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0"
	I0120 14:32:55.950284  959078 cri.go:89] found id: "3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
	I0120 14:32:55.950290  959078 cri.go:89] found id: ""
	I0120 14:32:55.950297  959078 logs.go:282] 2 containers: [6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0 3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d]
	I0120 14:32:55.950357  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:55.955259  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:55.966194  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:32:55.966277  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:32:56.024403  959078 cri.go:89] found id: "dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1"
	I0120 14:32:56.024423  959078 cri.go:89] found id: "8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
	I0120 14:32:56.024428  959078 cri.go:89] found id: ""
	I0120 14:32:56.024436  959078 logs.go:282] 2 containers: [dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1 8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa]
	I0120 14:32:56.024500  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.029413  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.034504  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:32:56.034584  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:32:56.076966  959078 cri.go:89] found id: "629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e"
	I0120 14:32:56.076993  959078 cri.go:89] found id: "d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
	I0120 14:32:56.076998  959078 cri.go:89] found id: ""
	I0120 14:32:56.077006  959078 logs.go:282] 2 containers: [629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386]
	I0120 14:32:56.077080  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.081115  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.086545  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:32:56.086669  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:32:56.126757  959078 cri.go:89] found id: "12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90"
	I0120 14:32:56.126782  959078 cri.go:89] found id: "3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
	I0120 14:32:56.126788  959078 cri.go:89] found id: ""
	I0120 14:32:56.126796  959078 logs.go:282] 2 containers: [12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90 3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f]
	I0120 14:32:56.126859  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.130545  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.134075  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:32:56.134178  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:32:56.181428  959078 cri.go:89] found id: "93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830"
	I0120 14:32:56.181452  959078 cri.go:89] found id: "057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
	I0120 14:32:56.181456  959078 cri.go:89] found id: ""
	I0120 14:32:56.181463  959078 logs.go:282] 2 containers: [93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830 057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1]
	I0120 14:32:56.181554  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.185580  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.189242  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:32:56.189321  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:32:56.242280  959078 cri.go:89] found id: "e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb"
	I0120 14:32:56.242301  959078 cri.go:89] found id: "949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
	I0120 14:32:56.242306  959078 cri.go:89] found id: ""
	I0120 14:32:56.242314  959078 logs.go:282] 2 containers: [e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb 949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5]
	I0120 14:32:56.242371  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.246205  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.250037  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:32:56.250117  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:32:56.291743  959078 cri.go:89] found id: "5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b"
	I0120 14:32:56.291766  959078 cri.go:89] found id: "5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
	I0120 14:32:56.291772  959078 cri.go:89] found id: ""
	I0120 14:32:56.291779  959078 logs.go:282] 2 containers: [5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b 5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315]
	I0120 14:32:56.291838  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.295404  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.302789  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:32:56.302876  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:32:56.358332  959078 cri.go:89] found id: "a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6"
	I0120 14:32:56.358355  959078 cri.go:89] found id: ""
	I0120 14:32:56.358364  959078 logs.go:282] 1 containers: [a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6]
	I0120 14:32:56.358419  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.362394  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:32:56.362475  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:32:56.417727  959078 cri.go:89] found id: "f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8"
	I0120 14:32:56.417749  959078 cri.go:89] found id: "b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb"
	I0120 14:32:56.417754  959078 cri.go:89] found id: ""
	I0120 14:32:56.417761  959078 logs.go:282] 2 containers: [f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8 b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb]
	I0120 14:32:56.417817  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:32:56.421278  959078 ssh_runner.go:195] Run: which crictl
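Before gathering, the runner enumerates containers per control-plane component with `crictl ps -a --quiet --name=NAME`; two IDs per component are normal after a restart (the live container plus its exited predecessor), and each listing is bracketed by `which crictl` lookups. The same enumeration as a plain shell sketch, with the component list copied from the queries above:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
	  # --quiet prints bare container IDs, one per line; -a includes exited ones
	  printf '%s: %s\n' "$name" "$(sudo crictl ps -a --quiet --name="$name" | tr '\n' ' ')"
	done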
	I0120 14:32:56.425257  959078 logs.go:123] Gathering logs for kubernetes-dashboard [a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6] ...
	I0120 14:32:56.425287  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6"
	I0120 14:32:56.477196  959078 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:32:56.477225  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:32:56.629894  959078 logs.go:123] Gathering logs for kube-apiserver [3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d] ...
	I0120 14:32:56.629964  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
	I0120 14:32:56.685123  959078 logs.go:123] Gathering logs for etcd [8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa] ...
	I0120 14:32:56.685400  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
	I0120 14:32:56.745201  959078 logs.go:123] Gathering logs for coredns [629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e] ...
	I0120 14:32:56.745278  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e"
	I0120 14:32:56.792649  959078 logs.go:123] Gathering logs for kube-controller-manager [e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb] ...
	I0120 14:32:56.792682  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb"
	I0120 14:32:56.859606  959078 logs.go:123] Gathering logs for kube-controller-manager [949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5] ...
	I0120 14:32:56.859646  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
	I0120 14:32:56.922708  959078 logs.go:123] Gathering logs for container status ...
	I0120 14:32:56.922750  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:32:56.982342  959078 logs.go:123] Gathering logs for kindnet [5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b] ...
	I0120 14:32:56.982373  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b"
	I0120 14:32:57.033231  959078 logs.go:123] Gathering logs for kubelet ...
	I0120 14:32:57.033261  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:32:57.122895  959078 logs.go:123] Gathering logs for dmesg ...
	I0120 14:32:57.122936  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
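The dmesg gather narrows kernel messages to warning severity and above. Assuming util-linux dmesg semantics, -H (human-readable timestamps) would normally pipe through a pager, so -P suppresses it and -L=never drops color escape codes, which keeps the SSH-captured output clean:

	# Assumed flag meanings for the gather above (util-linux dmesg):
	#   -P          do not pipe output into a pager (needed because -H pages)
	#   -H          human-readable, relative timestamps
	#   -L=never    disable colored output
	#   --level ... keep only warn and more severe messages
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400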
	I0120 14:32:57.139634  959078 logs.go:123] Gathering logs for kube-apiserver [6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0] ...
	I0120 14:32:57.139686  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0"
	I0120 14:32:57.194713  959078 logs.go:123] Gathering logs for kube-scheduler [12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90] ...
	I0120 14:32:57.194847  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90"
	I0120 14:32:57.234732  959078 logs.go:123] Gathering logs for kube-scheduler [3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f] ...
	I0120 14:32:57.234760  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
	I0120 14:32:57.287201  959078 logs.go:123] Gathering logs for kube-proxy [93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830] ...
	I0120 14:32:57.287241  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830"
	I0120 14:32:57.347927  959078 logs.go:123] Gathering logs for kube-proxy [057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1] ...
	I0120 14:32:57.347961  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
	I0120 14:32:57.399223  959078 logs.go:123] Gathering logs for storage-provisioner [f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8] ...
	I0120 14:32:57.399255  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8"
	I0120 14:32:57.440813  959078 logs.go:123] Gathering logs for containerd ...
	I0120 14:32:57.440895  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:32:57.507987  959078 logs.go:123] Gathering logs for etcd [dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1] ...
	I0120 14:32:57.508026  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1"
	I0120 14:32:57.557577  959078 logs.go:123] Gathering logs for coredns [d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386] ...
	I0120 14:32:57.557681  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
	I0120 14:32:57.601620  959078 logs.go:123] Gathering logs for kindnet [5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315] ...
	I0120 14:32:57.601649  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
	I0120 14:32:57.645710  959078 logs.go:123] Gathering logs for storage-provisioner [b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb] ...
	I0120 14:32:57.645736  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb"
	I0120 14:33:00.192118  959078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:33:00.212066  959078 api_server.go:72] duration metric: took 4m10.744671563s to wait for apiserver process to appear ...
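The apiserver-process probe uses three standard procps pgrep flags: -f matches against the full command line rather than just the process name, -x requires the pattern to match that command line exactly (hence the .* wildcards at both ends), and -n reports only the newest match. As a standalone line:

	# -f full command line, -x exact pattern match, -n newest matching PID
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'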
	I0120 14:33:00.212152  959078 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:33:00.212230  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:33:00.212349  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:33:00.272543  959078 cri.go:89] found id: "6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0"
	I0120 14:33:00.272572  959078 cri.go:89] found id: "3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
	I0120 14:33:00.272580  959078 cri.go:89] found id: ""
	I0120 14:33:00.272588  959078 logs.go:282] 2 containers: [6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0 3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d]
	I0120 14:33:00.272683  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.282927  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.290020  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 14:33:00.290143  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:33:00.357040  959078 cri.go:89] found id: "dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1"
	I0120 14:33:00.357067  959078 cri.go:89] found id: "8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
	I0120 14:33:00.357073  959078 cri.go:89] found id: ""
	I0120 14:33:00.357080  959078 logs.go:282] 2 containers: [dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1 8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa]
	I0120 14:33:00.357147  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.362205  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.366997  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 14:33:00.367100  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:33:00.412221  959078 cri.go:89] found id: "629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e"
	I0120 14:33:00.412288  959078 cri.go:89] found id: "d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
	I0120 14:33:00.412310  959078 cri.go:89] found id: ""
	I0120 14:33:00.412324  959078 logs.go:282] 2 containers: [629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386]
	I0120 14:33:00.412402  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.416260  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.419715  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:33:00.419799  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:33:00.459298  959078 cri.go:89] found id: "12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90"
	I0120 14:33:00.459321  959078 cri.go:89] found id: "3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
	I0120 14:33:00.459327  959078 cri.go:89] found id: ""
	I0120 14:33:00.459334  959078 logs.go:282] 2 containers: [12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90 3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f]
	I0120 14:33:00.459397  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.463492  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.467676  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:33:00.467806  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:33:00.514141  959078 cri.go:89] found id: "93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830"
	I0120 14:33:00.514209  959078 cri.go:89] found id: "057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
	I0120 14:33:00.514220  959078 cri.go:89] found id: ""
	I0120 14:33:00.514229  959078 logs.go:282] 2 containers: [93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830 057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1]
	I0120 14:33:00.514299  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.518838  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.522280  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:33:00.522421  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:33:00.562126  959078 cri.go:89] found id: "e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb"
	I0120 14:33:00.562150  959078 cri.go:89] found id: "949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
	I0120 14:33:00.562162  959078 cri.go:89] found id: ""
	I0120 14:33:00.562171  959078 logs.go:282] 2 containers: [e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb 949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5]
	I0120 14:33:00.562228  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.565971  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.569419  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 14:33:00.569504  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:33:00.610762  959078 cri.go:89] found id: "5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b"
	I0120 14:33:00.610837  959078 cri.go:89] found id: "5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
	I0120 14:33:00.610857  959078 cri.go:89] found id: ""
	I0120 14:33:00.610873  959078 logs.go:282] 2 containers: [5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b 5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315]
	I0120 14:33:00.610950  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.614551  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.618361  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 14:33:00.618458  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 14:33:00.658033  959078 cri.go:89] found id: "f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8"
	I0120 14:33:00.658104  959078 cri.go:89] found id: "b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb"
	I0120 14:33:00.658117  959078 cri.go:89] found id: ""
	I0120 14:33:00.658126  959078 logs.go:282] 2 containers: [f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8 b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb]
	I0120 14:33:00.658189  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.662156  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.665641  959078 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:33:00.665722  959078 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:33:00.710602  959078 cri.go:89] found id: "a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6"
	I0120 14:33:00.710676  959078 cri.go:89] found id: ""
	I0120 14:33:00.710691  959078 logs.go:282] 1 containers: [a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6]
	I0120 14:33:00.710757  959078 ssh_runner.go:195] Run: which crictl
	I0120 14:33:00.714662  959078 logs.go:123] Gathering logs for coredns [d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386] ...
	I0120 14:33:00.714697  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d841d0d9d12d448c8d8a29e50a4d0a4a0f4a12d18c3fb25f9c17bbb781a75386"
	I0120 14:33:00.753753  959078 logs.go:123] Gathering logs for kube-controller-manager [e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb] ...
	I0120 14:33:00.753836  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5207742d51d06125f4d994b97dae57dcace73c62ca87a1916eebc45224afcdb"
	I0120 14:33:00.815366  959078 logs.go:123] Gathering logs for kube-controller-manager [949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5] ...
	I0120 14:33:00.815404  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 949ddc985615bd2bc37c114b819ec9a8d31dfc4f19e732682a527294d3a2fce5"
	I0120 14:33:00.876962  959078 logs.go:123] Gathering logs for kindnet [5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b] ...
	I0120 14:33:00.876996  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a29bd6fbe365cbdde42debc89bcb48ebac79358b09ff50aa25a3bb8a065d94b"
	I0120 14:33:00.926741  959078 logs.go:123] Gathering logs for dmesg ...
	I0120 14:33:00.926773  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:33:00.950697  959078 logs.go:123] Gathering logs for kube-apiserver [3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d] ...
	I0120 14:33:00.950726  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3232cbcc4e897578f16681148db0a9d9160bd4f125b558a6c771f34a3c79770d"
	I0120 14:33:01.022434  959078 logs.go:123] Gathering logs for etcd [dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1] ...
	I0120 14:33:01.022469  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dadd685b33de3c75bdf229fb999e86f8956e53b81fb3d519b4d80b2cb02a06a1"
	I0120 14:33:01.066738  959078 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:33:01.066774  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 14:33:01.200945  959078 logs.go:123] Gathering logs for kindnet [5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315] ...
	I0120 14:33:01.200982  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5db0db7d5805425abf0607f0d9b61b5583c8b2a97fc6fcd29703b22a0fd76315"
	I0120 14:33:01.245299  959078 logs.go:123] Gathering logs for storage-provisioner [f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8] ...
	I0120 14:33:01.245330  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f51abd121ca88e192c362b7bcb29d9e9382cf29ebc67e2bfe93cef71a6ca8ea8"
	I0120 14:33:01.312614  959078 logs.go:123] Gathering logs for containerd ...
	I0120 14:33:01.312642  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 14:33:01.387399  959078 logs.go:123] Gathering logs for container status ...
	I0120 14:33:01.387465  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:33:01.437969  959078 logs.go:123] Gathering logs for kubelet ...
	I0120 14:33:01.438000  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:33:01.524000  959078 logs.go:123] Gathering logs for kube-apiserver [6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0] ...
	I0120 14:33:01.524037  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6710b141c2fc48fe6b58113db0a0a6b327bc06e844f0194f70f8c884768514d0"
	I0120 14:33:01.578285  959078 logs.go:123] Gathering logs for etcd [8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa] ...
	I0120 14:33:01.578321  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3b5e306cd3342009ef2f67860ae88e8dabba7c5d58da97bceca74eede7fcfa"
	I0120 14:33:01.623669  959078 logs.go:123] Gathering logs for kube-proxy [93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830] ...
	I0120 14:33:01.623704  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a3524c3cec9715160d00833f325c2215a9c992890f6e2b10499cd2fdfd6830"
	I0120 14:33:01.669218  959078 logs.go:123] Gathering logs for kube-proxy [057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1] ...
	I0120 14:33:01.669250  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 057fd8c5325ad174d633d84b3f03de4dbe7550475bb02ffc8830dfff1181a3f1"
	I0120 14:33:01.721751  959078 logs.go:123] Gathering logs for storage-provisioner [b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb] ...
	I0120 14:33:01.721780  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b63139e0b6e72161d01c2ef7be49b99d4c9a0bd9ea30311c349dd63f2faa1deb"
	I0120 14:33:01.764526  959078 logs.go:123] Gathering logs for kubernetes-dashboard [a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6] ...
	I0120 14:33:01.764555  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a41a9ceba7b0f6977464d259868f862098ae569969adfba952a8c1911695dcb6"
	I0120 14:33:01.811149  959078 logs.go:123] Gathering logs for coredns [629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e] ...
	I0120 14:33:01.811179  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 629f31bac51f8ca79c962e232f87c2ca74c2032da9e0b6da17f70a274ba88b2e"
	I0120 14:33:01.857926  959078 logs.go:123] Gathering logs for kube-scheduler [12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90] ...
	I0120 14:33:01.858015  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 12f5d80cf8f33474af9f93d2874d287bb5987c9a8eed154bfb5c9a0657b26b90"
	I0120 14:33:01.901271  959078 logs.go:123] Gathering logs for kube-scheduler [3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f] ...
	I0120 14:33:01.901300  959078 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e0feebe4fd6614e3066f9ed379d35c483de503c256c82e59d5034e280c08e9f"
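	Each "Gathering logs for ..." step above repeats one pattern over SSH: resolve container IDs with crictl, then tail that container's log. A by-hand equivalent from inside the node (shell access, e.g. via `minikube ssh`, is assumed):

	    # list all kube-controller-manager containers (running and exited) by ID
	    CID=$(sudo crictl ps -a --quiet --name=kube-controller-manager | head -n1)
	    # tail the last 400 lines, as logs.go does above
	    sudo /usr/bin/crictl logs --tail 400 "$CID"
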
	I0120 14:33:05.995189  950903 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0120 14:33:06.005351  950903 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0120 14:33:06.009443  950903 out.go:201] 
	W0120 14:33:06.013033  950903 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0120 14:33:06.013087  950903 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0120 14:33:06.013119  950903 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0120 14:33:06.013130  950903 out.go:270] * 
	W0120 14:33:06.014124  950903 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 14:33:06.017802  950903 out.go:201] 
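	The exit above is the test's actual failure point: the apiserver answers /healthz with 200, yet minikube aborts with K8S_UNHEALTHY_CONTROL_PLANE because, per the message, the control plane "never updated to v1.20.0" within the 6m0s wait. The two follow-ups minikube itself suggests:

	    # collect the full log bundle to attach to a GitHub issue
	    minikube logs --file=logs.txt
	    # wipe all profiles and cached state before retrying
	    minikube delete --all --purge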
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	7695074e176e0       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   34d743f08be0e       dashboard-metrics-scraper-8d5bb5db8-glscn
	0731b37e3a8d5       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   3c15f60dbf5b9       storage-provisioner
	c1745625d0923       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   34ec62419cbc9       kubernetes-dashboard-cd95d586-rckbc
	15e6eca40378b       2be0bcf609c65       5 minutes ago       Running             kindnet-cni                 1                   5cea1cf13e7b6       kindnet-7z8qd
	46dbef2bf421e       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   3c15f60dbf5b9       storage-provisioner
	df227ea0cd40a       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   33e86d1ef1601       coredns-74ff55c5b-qsqbp
	980a433503981       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   d007e9d87e1d4       kube-proxy-wrpl6
	d36017b735848       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   808c2b428e0fb       busybox
	cf07d13821464       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   ebf23fa5e76a0       kube-controller-manager-old-k8s-version-140749
	7cbffdc94e647       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   d8c067ab50b6f       kube-apiserver-old-k8s-version-140749
	901324074aae3       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   ee07298b7a09c       kube-scheduler-old-k8s-version-140749
	260a4c4121f58       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   6b43306dbf679       etcd-old-k8s-version-140749
	296c4154063f4       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   80c26978fc2bd       busybox
	49305c6d7d9da       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   27cf8a620ac5f       coredns-74ff55c5b-qsqbp
	4b0e77b57208a       2be0bcf609c65       7 minutes ago       Exited              kindnet-cni                 0                   262232e33b738       kindnet-7z8qd
	4161d34b27869       25a5233254979       7 minutes ago       Exited              kube-proxy                  0                   a7655b564d91f       kube-proxy-wrpl6
	4dc67e60f527c       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   45122d8ef8f9e       etcd-old-k8s-version-140749
	a38942066106c       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   7e2cb02745e89       kube-scheduler-old-k8s-version-140749
	f1fd5c8cbb787       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   f667b2f8b920f       kube-controller-manager-old-k8s-version-140749
	032a69713fb6a       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   998b8736a7e8b       kube-apiserver-old-k8s-version-140749
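	This table is the crictl view of the node (the `sudo crictl ps -a` fallback shown in the "container status" gathering step above). The first column is the truncated container ID; crictl commands also accept such an ID as an argument when the prefix is unambiguous (an assumption about crictl's ID matching, not something shown in this log):

	    sudo crictl ps -a
	    # e.g. inspect the dashboard-metrics-scraper that keeps exiting
	    sudo crictl logs --tail 50 7695074e176e0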
	
	
	==> containerd <==
	Jan 20 14:29:00 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:00.951886437Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 20 14:29:34 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:34.950990972Z" level=info msg="CreateContainer within sandbox \"34d743f08be0e3144c74273abd1d5c121fb5a75561a4bb7e6c59af7060fc1109\" for container name:\"dashboard-metrics-scraper\" attempt:4"
	Jan 20 14:29:34 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:34.974698206Z" level=info msg="CreateContainer within sandbox \"34d743f08be0e3144c74273abd1d5c121fb5a75561a4bb7e6c59af7060fc1109\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\""
	Jan 20 14:29:34 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:34.975748295Z" level=info msg="StartContainer for \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\""
	Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.057059200Z" level=info msg="StartContainer for \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\" returns successfully"
	Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.057332202Z" level=info msg="received exit event container_id:\"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\" id:\"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\" pid:3101 exit_status:255 exited_at:{seconds:1737383375 nanos:56479136}"
	Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.084398301Z" level=info msg="shim disconnected" id=810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c namespace=k8s.io
	Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.084465633Z" level=warning msg="cleaning up after shim disconnected" id=810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c namespace=k8s.io
	Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.084518310Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.905431906Z" level=info msg="RemoveContainer for \"379fb20749554b9c5354559de926426f3a11f312adb0394e659c317412686a8a\""
	Jan 20 14:29:35 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:29:35.919071004Z" level=info msg="RemoveContainer for \"379fb20749554b9c5354559de926426f3a11f312adb0394e659c317412686a8a\" returns successfully"
	Jan 20 14:30:28 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:30:28.949694920Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:30:28 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:30:28.955017979Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Jan 20 14:30:28 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:30:28.958062365Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Jan 20 14:30:28 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:30:28.958173282Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 20 14:31:03 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:03.944750242Z" level=info msg="CreateContainer within sandbox \"34d743f08be0e3144c74273abd1d5c121fb5a75561a4bb7e6c59af7060fc1109\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Jan 20 14:31:03 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:03.965704357Z" level=info msg="CreateContainer within sandbox \"34d743f08be0e3144c74273abd1d5c121fb5a75561a4bb7e6c59af7060fc1109\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\""
	Jan 20 14:31:03 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:03.966502161Z" level=info msg="StartContainer for \"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\""
	Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.037924212Z" level=info msg="StartContainer for \"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\" returns successfully"
	Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.037994104Z" level=info msg="received exit event container_id:\"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\" id:\"7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370\" pid:3331 exit_status:255 exited_at:{seconds:1737383464 nanos:37380760}"
	Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.064498371Z" level=info msg="shim disconnected" id=7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370 namespace=k8s.io
	Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.064562634Z" level=warning msg="cleaning up after shim disconnected" id=7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370 namespace=k8s.io
	Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.064575180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.187184408Z" level=info msg="RemoveContainer for \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\""
	Jan 20 14:31:04 old-k8s-version-140749 containerd[568]: time="2025-01-20T14:31:04.200993893Z" level=info msg="RemoveContainer for \"810c080e08199082ed021a5fab944aa6cfca2d73fcca6758804ef6ac2578a58c\" returns successfully"
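	Two distinct loops are visible here: containerd retrying `PullImage "fake.domain/registry.k8s.io/echoserver:1.4"` and failing on DNS (the host `fake.domain` does not resolve, which the name suggests is deliberate in this test setup), and dashboard-metrics-scraper being recreated (attempt 4, then 5), exiting with status 255, and having its previous container removed. The DNS failure reproduces directly from inside the node:

	    # the same lookup containerd attempts, against the node's resolver
	    nslookup fake.domain 192.168.85.1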
	
	
	==> coredns [49305c6d7d9dad7b4fd674601bb6bd22715c8b8e4492586025b945e08261a47d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:50938 - 37407 "HINFO IN 9169729266110919042.8718230186487203758. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011532726s
	
	
	==> coredns [df227ea0cd40a06e2a4ce199c6e568d0cc4f73c8aaab1d998ecfb9aa875f3f1b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:45027 - 43048 "HINFO IN 1347004244483538843.143973130411976210. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012318517s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0120 14:27:57.287915       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 14:27:27.287417121 +0000 UTC m=+0.032230241) (total time: 30.000395308s):
	Trace[2019727887]: [30.000395308s] [30.000395308s] END
	E0120 14:27:57.287951       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0120 14:27:57.288045       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 14:27:27.287817795 +0000 UTC m=+0.032630915) (total time: 30.000216272s):
	Trace[939984059]: [30.000216272s] [30.000216272s] END
	E0120 14:27:57.288050       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0120 14:27:57.288390       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 14:27:27.288034329 +0000 UTC m=+0.032847441) (total time: 30.000340252s):
	Trace[911902081]: [30.000340252s] [30.000340252s] END
	E0120 14:27:57.288403       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
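	The restarted coredns instance spent its first 30 seconds unable to reach the apiserver's ClusterIP: each Reflector trace starts at 14:27:27 and ends 30.0s later with `dial tcp 10.96.0.1:443: i/o timeout`, and the `plugin/ready: Still waiting on: "kubernetes"` lines cover the same startup window. One way to confirm what it was dialing, assuming working kubectl access:

	    # 10.96.0.1 is the ClusterIP of the in-cluster `kubernetes` Service
	    kubectl get svc kubernetes -n default -o wide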
	
	
	==> describe nodes <==
	Name:               old-k8s-version-140749
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-140749
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3
	                    minikube.k8s.io/name=old-k8s-version-140749
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T14_24_51_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 14:24:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-140749
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 14:33:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 14:28:14 +0000   Mon, 20 Jan 2025 14:24:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 14:28:14 +0000   Mon, 20 Jan 2025 14:24:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 14:28:14 +0000   Mon, 20 Jan 2025 14:24:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 14:28:14 +0000   Mon, 20 Jan 2025 14:25:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-140749
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 4382c30ae98a43cd9832cdf594ab0620
	  System UUID:                9ce90266-c33e-4cc5-b4a8-d30bd2d0e32d
	  Boot ID:                    1cf72276-e5cc-4a75-95c3-e1897ed2b9a5
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 coredns-74ff55c5b-qsqbp                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m
	  kube-system                 etcd-old-k8s-version-140749                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m8s
	  kube-system                 kindnet-7z8qd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m
	  kube-system                 kube-apiserver-old-k8s-version-140749             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-controller-manager-old-k8s-version-140749    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-proxy-wrpl6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-scheduler-old-k8s-version-140749             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 metrics-server-9975d5f86-lfq2q                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m22s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-glscn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-rckbc               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m28s (x5 over 8m29s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s (x5 over 8m29s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s (x4 over 8m29s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m8s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m8s                   kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m8s                   kubelet     Node old-k8s-version-140749 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m8s                   kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m8s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m                     kubelet     Node old-k8s-version-140749 status is now: NodeReady
	  Normal  Starting                 7m59s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m55s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m54s (x8 over 5m55s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x8 over 5m55s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s (x7 over 5m55s)  kubelet     Node old-k8s-version-140749 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m40s                  kube-proxy  Starting kube-proxy.
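	This section is produced by the exact command shown in the "describe nodes" gathering step above; note that it shells out to a v1.32.0 kubectl binary while the node itself reports Kubelet v1.20.0:

	    sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig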
	
	
	==> dmesg <==
	[Jan20 14:12] systemd-journald[216]: Failed to send WATCHDOG=1 notification message: Connection refused
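	Only one message survives the warn-and-above filter, a journald watchdog hiccup from 14:12, before this test's restart. The filter is the command from the dmesg gathering step above:

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400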
	
	
	==> etcd [260a4c4121f5862ff8f52117d5179ac2f79d4f64e1abd45f4977a0c8aee20c8b] <==
	2025-01-20 14:28:59.221447 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:29:09.221468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:29:19.221333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:29:29.221185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:29:39.221227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:29:49.221425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:29:59.221377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:30:09.221279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:30:19.221409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:30:29.221197 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:30:39.221353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:30:49.221245 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:30:59.221549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:31:09.221380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:31:19.222302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:31:29.221198 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:31:39.221240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:31:49.221357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:31:59.221214 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:32:09.221388 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:32:19.221341 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:32:29.221184 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:32:39.221504 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:32:49.221192 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:32:59.221318 I | etcdserver/api/etcdhttp: /health OK (status code 200)
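	The running etcd answered its /health probe with 200 every ~10s across the whole failure window (14:28:59 through 14:32:59), so the unhealthy-control-plane verdict is not an etcd problem. The first instance's startup log below shows metrics served on http://127.0.0.1:2381; etcd 3.4 also serves /health unauthenticated on that metrics listener (a property of the metrics endpoint, and an assumption that the restarted instance uses the same listener):

	    # from inside the node; no client certs needed on the metrics listener
	    curl -s http://127.0.0.1:2381/health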
	
	
	==> etcd [4dc67e60f527c40b47bcc9b98e67ef2a48fe137b0c178e3a70f757294733ee5b] <==
	2025-01-20 14:24:39.930436 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2025/01/20 14:24:40 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2025/01/20 14:24:40 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2025/01/20 14:24:40 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2025/01/20 14:24:40 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2025/01/20 14:24:40 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2025-01-20 14:24:40.704849 I | etcdserver: published {Name:old-k8s-version-140749 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2025-01-20 14:24:40.705040 I | embed: ready to serve client requests
	2025-01-20 14:24:40.706590 I | embed: serving client requests on 192.168.85.2:2379
	2025-01-20 14:24:40.745632 I | etcdserver: setting up the initial cluster version to 3.4
	2025-01-20 14:24:40.746761 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-01-20 14:24:40.746961 I | embed: ready to serve client requests
	2025-01-20 14:24:40.760897 I | embed: serving client requests on 127.0.0.1:2379
	2025-01-20 14:24:40.830673 I | etcdserver/api: enabled capabilities for version 3.4
	2025-01-20 14:25:04.292803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:25:13.415355 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:25:23.415224 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:25:33.415203 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:25:43.415229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:25:53.415193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:26:03.415251 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:26:13.415363 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:26:23.415161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:26:33.415215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 14:26:43.415410 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 14:33:07 up  4:15,  0 users,  load average: 0.97, 2.15, 2.80
	Linux old-k8s-version-140749 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
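	A sketch of commands producing the same three lines (the report's actual collection code may differ):

	    uptime
	    uname -a
	    grep PRETTY_NAME /etc/os-release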
	
	
	==> kindnet [15e6eca40378bd8e64f1463de9e07056d1984b23e44c23fcce3037d81ac483ed] <==
	I0120 14:30:58.622867       1 main.go:301] handling current node
	I0120 14:31:08.624120       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:31:08.624355       1 main.go:301] handling current node
	I0120 14:31:18.630874       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:31:18.630918       1 main.go:301] handling current node
	I0120 14:31:28.623214       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:31:28.623249       1 main.go:301] handling current node
	I0120 14:31:38.628221       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:31:38.628261       1 main.go:301] handling current node
	I0120 14:31:48.631464       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:31:48.631559       1 main.go:301] handling current node
	I0120 14:31:58.631421       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:31:58.631457       1 main.go:301] handling current node
	I0120 14:32:08.628299       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:32:08.628336       1 main.go:301] handling current node
	I0120 14:32:18.632225       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:32:18.632263       1 main.go:301] handling current node
	I0120 14:32:28.623452       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:32:28.623488       1 main.go:301] handling current node
	I0120 14:32:38.630150       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:32:38.630187       1 main.go:301] handling current node
	I0120 14:32:48.631068       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:32:48.631170       1 main.go:301] handling current node
	I0120 14:32:58.628006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:32:58.628109       1 main.go:301] handling current node
	
	
	==> kindnet [4b0e77b57208af095fe6b1b5e38db68b330ea0d299e73adacebeaead21216c4f] <==
	I0120 14:25:11.129475       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0120 14:25:11.523616       1 controller.go:361] Starting controller kube-network-policies
	I0120 14:25:11.523979       1 controller.go:365] Waiting for informer caches to sync
	I0120 14:25:11.524079       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0120 14:25:11.724462       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0120 14:25:11.724491       1 metrics.go:61] Registering metrics
	I0120 14:25:11.724715       1 controller.go:401] Syncing nftables rules
	I0120 14:25:21.530835       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:25:21.530897       1 main.go:301] handling current node
	I0120 14:25:31.523111       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:25:31.523151       1 main.go:301] handling current node
	I0120 14:25:41.532113       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:25:41.532161       1 main.go:301] handling current node
	I0120 14:25:51.528038       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:25:51.528072       1 main.go:301] handling current node
	I0120 14:26:01.523660       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:26:01.523699       1 main.go:301] handling current node
	I0120 14:26:11.522899       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:26:11.522936       1 main.go:301] handling current node
	I0120 14:26:21.522574       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:26:21.522613       1 main.go:301] handling current node
	I0120 14:26:31.530416       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:26:31.530451       1 main.go:301] handling current node
	I0120 14:26:41.523257       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 14:26:41.523347       1 main.go:301] handling current node
	
	
	==> kube-apiserver [032a69713fb6aca7368581b470ee354fa5307787fb6df5e8868a4dfacb2c6e63] <==
	I0120 14:24:48.407504       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0120 14:24:48.407825       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0120 14:24:48.433081       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0120 14:24:48.437427       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0120 14:24:48.437457       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0120 14:24:48.994160       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0120 14:24:49.051743       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0120 14:24:49.183703       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0120 14:24:49.185327       1 controller.go:606] quota admission added evaluator for: endpoints
	I0120 14:24:49.195815       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0120 14:24:50.075246       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0120 14:24:50.665254       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0120 14:24:50.735202       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0120 14:24:59.153925       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0120 14:25:07.535612       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0120 14:25:07.673689       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0120 14:25:11.572350       1 client.go:360] parsed scheme: "passthrough"
	I0120 14:25:11.572480       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 14:25:11.572560       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 14:25:52.106873       1 client.go:360] parsed scheme: "passthrough"
	I0120 14:25:52.106922       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 14:25:52.106932       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 14:26:24.548264       1 client.go:360] parsed scheme: "passthrough"
	I0120 14:26:24.548307       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 14:26:24.548316       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [7cbffdc94e647ea422fdd6fec35fcd0ce91ed50e4fd9166e68f882de804ef30c] <==
	I0120 14:29:51.905864       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 14:29:51.905899       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 14:30:25.304025       1 client.go:360] parsed scheme: "passthrough"
	I0120 14:30:25.304075       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 14:30:25.304085       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0120 14:30:27.414181       1 handler_proxy.go:102] no RequestInfo found in the context
	E0120 14:30:27.414283       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0120 14:30:27.414297       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:30:58.441275       1 client.go:360] parsed scheme: "passthrough"
	I0120 14:30:58.441319       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 14:30:58.441328       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 14:31:29.891590       1 client.go:360] parsed scheme: "passthrough"
	I0120 14:31:29.891647       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 14:31:29.891655       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 14:32:00.549550       1 client.go:360] parsed scheme: "passthrough"
	I0120 14:32:00.549619       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 14:32:00.549782       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0120 14:32:24.848487       1 handler_proxy.go:102] no RequestInfo found in the context
	E0120 14:32:24.848574       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0120 14:32:24.848586       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:32:37.968457       1 client.go:360] parsed scheme: "passthrough"
	I0120 14:32:37.968506       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 14:32:37.968516       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
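	The only errors in the restarted apiserver are the aggregated metrics API failing with 503: `v1beta1.metrics.k8s.io` is backed by the metrics-server pod, which never starts because its image pull fails (see the containerd section above). A quick confirmation, assuming working kubectl access and the usual `k8s-app=metrics-server` label:

	    # expect Available=False with a discovery-check failure
	    kubectl get apiservice v1beta1.metrics.k8s.io
	    kubectl -n kube-system get pods -l k8s-app=metrics-server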
	
	
	==> kube-controller-manager [cf07d138214640fe1ae431572612b457891b329aa07cd46878d87267a5706e45] <==
	I0120 14:28:48.130610       1 request.go:655] Throttling request took 1.048278816s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 14:28:48.983404       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 14:29:15.044263       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 14:29:20.633782       1 request.go:655] Throttling request took 1.048267223s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 14:29:21.485268       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 14:29:45.546499       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 14:29:53.135704       1 request.go:655] Throttling request took 1.048510892s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0120 14:29:53.987253       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 14:30:16.048578       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 14:30:25.637696       1 request.go:655] Throttling request took 1.048336549s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0120 14:30:26.489106       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 14:30:46.550521       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 14:30:58.139528       1 request.go:655] Throttling request took 1.048395462s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0120 14:30:58.990963       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 14:31:17.052505       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 14:31:30.641236       1 request.go:655] Throttling request took 1.048195216s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 14:31:31.492764       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 14:31:47.554280       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 14:32:03.143197       1 request.go:655] Throttling request took 1.048115109s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 14:32:03.994652       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 14:32:18.056238       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 14:32:35.645208       1 request.go:655] Throttling request took 1.048310736s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 14:32:36.496590       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 14:32:48.558309       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 14:33:08.147329       1 request.go:655] Throttling request took 1.048099289s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
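	The request throttling, `failed to discover some groups`, and `unable to retrieve the complete list of server APIs` messages all come from periodic API discovery tripping over the unavailable metrics.k8s.io group on a roughly 30-second cycle; they are noise relative to the image-pull failure behind them. A discovery check, assuming kubectl access:

	    # fails with a 503 while metrics-server is down
	    kubectl api-resources --api-group=metrics.k8s.io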
	
	
	==> kube-controller-manager [f1fd5c8cbb787f1ed0e7d7d89dd3c534bf9c2338e3ec74bc3814faa75632fbec] <==
	I0120 14:25:07.571452       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0120 14:25:07.571580       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0120 14:25:07.571594       1 shared_informer.go:247] Caches are synced for TTL 
	I0120 14:25:07.581084       1 shared_informer.go:247] Caches are synced for GC 
	I0120 14:25:07.571775       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0120 14:25:07.574121       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0120 14:25:07.577338       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0120 14:25:07.583823       1 shared_informer.go:247] Caches are synced for attach detach 
	I0120 14:25:07.710314       1 shared_informer.go:247] Caches are synced for job 
	I0120 14:25:07.726431       1 shared_informer.go:247] Caches are synced for resource quota 
	I0120 14:25:07.734594       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7z8qd"
	I0120 14:25:07.740645       1 shared_informer.go:247] Caches are synced for namespace 
	I0120 14:25:07.759075       1 shared_informer.go:247] Caches are synced for resource quota 
	I0120 14:25:07.780689       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wrpl6"
	I0120 14:25:07.825118       1 shared_informer.go:247] Caches are synced for service account 
	I0120 14:25:07.908910       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0120 14:25:08.209143       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0120 14:25:08.221149       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0120 14:25:08.221188       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0120 14:25:09.152671       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0120 14:25:09.176917       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-876j9"
	I0120 14:25:12.529155       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0120 14:26:44.025850       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0120 14:26:44.049498       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0120 14:26:44.072901       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	
	
	==> kube-proxy [4161d34b2786916cb0549dcd8de4534dc9db3e777d0982106648472d8f349f25] <==
	I0120 14:25:08.758274       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0120 14:25:08.758397       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0120 14:25:08.787279       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0120 14:25:08.787392       1 server_others.go:185] Using iptables Proxier.
	I0120 14:25:08.787612       1 server.go:650] Version: v1.20.0
	I0120 14:25:08.788119       1 config.go:315] Starting service config controller
	I0120 14:25:08.788137       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0120 14:25:08.790184       1 config.go:224] Starting endpoint slice config controller
	I0120 14:25:08.790198       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0120 14:25:08.888261       1 shared_informer.go:247] Caches are synced for service config 
	I0120 14:25:08.890488       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [980a43350398110b412c3f6e59efeda614541f09e3506488a7ff4895d6b36e7d] <==
	I0120 14:27:27.477506       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0120 14:27:27.477659       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0120 14:27:27.503875       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0120 14:27:27.504148       1 server_others.go:185] Using iptables Proxier.
	I0120 14:27:27.504508       1 server.go:650] Version: v1.20.0
	I0120 14:27:27.507857       1 config.go:315] Starting service config controller
	I0120 14:27:27.578106       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0120 14:27:27.508046       1 config.go:224] Starting endpoint slice config controller
	I0120 14:27:27.578144       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0120 14:27:27.678336       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0120 14:27:27.678409       1 shared_informer.go:247] Caches are synced for service config 
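	Both kube-proxy runs log `Unknown proxy mode "", assuming iptables proxy`, i.e. the mode field was left empty in the kube-proxy configuration and the iptables default applied; otherwise both came up cleanly. In kubeadm-style clusters that configuration lives in a ConfigMap (an assumption about this cluster's layout):

	    kubectl -n kube-system get configmap kube-proxy -o yaml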
	
	
	==> kube-scheduler [901324074aae31766aa341eb4a69406d14d2ede7b884894ff9c7b5db6181ab9f] <==
	I0120 14:27:16.246703       1 serving.go:331] Generated self-signed cert in-memory
	W0120 14:27:23.283560       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 14:27:23.284603       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 14:27:23.284698       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 14:27:23.284774       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 14:27:23.744988       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 14:27:23.745017       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 14:27:23.752790       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0120 14:27:23.752895       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	E0120 14:27:23.839998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 14:27:23.840103       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 14:27:23.840166       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 14:27:23.840221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 14:27:23.840277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 14:27:23.840331       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 14:27:23.840385       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 14:27:23.842877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 14:27:23.856564       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 14:27:23.856607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 14:27:23.856653       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 14:27:24.006024       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0120 14:27:25.046506       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [a38942066106cd75e049e0d132b2866b6292b003a66bbba6c8797d90c2c2c071] <==
	W0120 14:24:47.472770       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 14:24:47.472873       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 14:24:47.542200       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0120 14:24:47.542703       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 14:24:47.542713       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 14:24:47.542731       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0120 14:24:47.575533       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 14:24:47.575871       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 14:24:47.575970       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 14:24:47.576088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 14:24:47.576164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 14:24:47.576233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 14:24:47.576297       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 14:24:47.576423       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 14:24:47.576502       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 14:24:47.576645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 14:24:47.576670       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 14:24:47.576746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 14:24:48.466167       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 14:24:48.481444       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 14:24:48.595638       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 14:24:48.640072       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 14:24:48.692478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 14:24:48.693701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0120 14:24:50.842835       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jan 20 14:31:22 old-k8s-version-140749 kubelet[663]: E0120 14:31:22.943814     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:31:33 old-k8s-version-140749 kubelet[663]: E0120 14:31:33.943249     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: I0120 14:31:34.943776     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
	Jan 20 14:31:34 old-k8s-version-140749 kubelet[663]: E0120 14:31:34.945214     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	Jan 20 14:31:46 old-k8s-version-140749 kubelet[663]: E0120 14:31:46.943127     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: I0120 14:31:48.942536     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
	Jan 20 14:31:48 old-k8s-version-140749 kubelet[663]: E0120 14:31:48.943309     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	Jan 20 14:31:59 old-k8s-version-140749 kubelet[663]: E0120 14:31:59.943359     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: I0120 14:32:00.942559     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
	Jan 20 14:32:00 old-k8s-version-140749 kubelet[663]: E0120 14:32:00.943110     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: I0120 14:32:11.942457     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
	Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.942810     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	Jan 20 14:32:11 old-k8s-version-140749 kubelet[663]: E0120 14:32:11.943763     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:32:23 old-k8s-version-140749 kubelet[663]: E0120 14:32:23.943210     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: I0120 14:32:26.942542     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
	Jan 20 14:32:26 old-k8s-version-140749 kubelet[663]: E0120 14:32:26.942877     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	Jan 20 14:32:36 old-k8s-version-140749 kubelet[663]: E0120 14:32:36.943299     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: I0120 14:32:40.942757     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
	Jan 20 14:32:40 old-k8s-version-140749 kubelet[663]: E0120 14:32:40.946352     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: I0120 14:32:51.942455     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
	Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.942791     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	Jan 20 14:32:51 old-k8s-version-140749 kubelet[663]: E0120 14:32:51.943917     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:33:02 old-k8s-version-140749 kubelet[663]: E0120 14:33:02.946128     663 pod_workers.go:191] Error syncing pod 262a8464-c4b2-490b-9a78-73cf76395e5f ("metrics-server-9975d5f86-lfq2q_kube-system(262a8464-c4b2-490b-9a78-73cf76395e5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 14:33:04 old-k8s-version-140749 kubelet[663]: I0120 14:33:04.946347     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 7695074e176e0aa0467dc7d9119d1d2b28238374d80ac1143b5465ed32169370
	Jan 20 14:33:04 old-k8s-version-140749 kubelet[663]: E0120 14:33:04.946800     663 pod_workers.go:191] Error syncing pod a4d7541c-7752-4f39-a042-599729c7584f ("dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-glscn_kubernetes-dashboard(a4d7541c-7752-4f39-a042-599729c7584f)"
	
	
	==> kubernetes-dashboard [c1745625d0923edd61a777a947e198c2e1c1c0281cfe51bed7ad852f109838e6] <==
	2025/01/20 14:27:49 Starting overwatch
	2025/01/20 14:27:49 Using namespace: kubernetes-dashboard
	2025/01/20 14:27:49 Using in-cluster config to connect to apiserver
	2025/01/20 14:27:49 Using secret token for csrf signing
	2025/01/20 14:27:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/20 14:27:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/20 14:27:49 Successful initial request to the apiserver, version: v1.20.0
	2025/01/20 14:27:49 Generating JWE encryption key
	2025/01/20 14:27:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/20 14:27:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/20 14:27:51 Initializing JWE encryption key from synchronized object
	2025/01/20 14:27:51 Creating in-cluster Sidecar client
	2025/01/20 14:27:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:27:51 Serving insecurely on HTTP port: 9090
	2025/01/20 14:28:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:28:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:29:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:29:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:30:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:30:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:31:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:31:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:32:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:32:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [0731b37e3a8d567bd12996640c15620a65ddeff3f29c0c8e2fdaa8048ac1f233] <==
	I0120 14:28:10.078388       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 14:28:10.094148       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 14:28:10.094197       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 14:28:27.551673       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 14:28:27.551733       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8983bdf6-654b-4c06-b3a1-5b8a77c8aef3", APIVersion:"v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-140749_b6684aa4-3b7c-452e-bbf2-a4a424253972 became leader
	I0120 14:28:27.552112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-140749_b6684aa4-3b7c-452e-bbf2-a4a424253972!
	I0120 14:28:27.654331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-140749_b6684aa4-3b7c-452e-bbf2-a4a424253972!
	
	
	==> storage-provisioner [46dbef2bf421ef40db591cf00c60f6db6ba3f90d96107e17d2a9099557efdcfa] <==
	I0120 14:27:27.399505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0120 14:27:57.401978       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
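
A few patterns in the captured logs above are routine restart noise rather than the failure itself:

The kube-scheduler "forbidden" bursts in both container instances are startup churn: the scheduler begins serving before its RBAC view is synced, and the errors stop at the final "Caches are synced" line in each log. The permissions can be spot-checked by impersonating the scheduler (a minimal sketch, assuming kubectl still points at this profile's context):

	kubectl --context old-k8s-version-140749 auth can-i list pods --as=system:kube-scheduler
	kubectl --context old-k8s-version-140749 auth can-i list statefulsets.apps --as=system:kube-scheduler

The kubernetes-dashboard container is healthy (serving on port 9090 within seconds of start); its repeating 30-second "Metric client health check failed" lines only reflect that the dashboard-metrics-scraper Service has no ready backend, which matches the scraper's CrashLoopBackOff in the kubelet log:

	kubectl --context old-k8s-version-140749 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper

The two storage-provisioner instances tell a restart story: the first died because it could not reach the in-cluster API endpoint 10.96.0.1:443 within its 30s window (networking not yet up after the container restart), and its replacement then waited about 17s to acquire the kube-system/k8s.io-minikube-hostpath leader lease while the stale one expired. Per the Kind:"Endpoints" event above, this client-go version records the lease on an Endpoints object; raw connectivity can likewise be probed from a throwaway pod (curlimages/curl is an arbitrary image choice, not part of this test):

	kubectl --context old-k8s-version-140749 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context old-k8s-version-140749 run api-probe --rm -it --restart=Never --image=curlimages/curl -- curl -k -m 5 https://10.96.0.1:443/version
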
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140749 -n old-k8s-version-140749
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-140749 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-lfq2q
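
This line pins the failure to a single pod: metrics-server-9975d5f86-lfq2q never leaves ImagePullBackOff because its image points at the unresolvable registry fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image fake.domain/..." line in the start output), which is consistent with the --wait=true start above exiting non-zero after six minutes. The helper describes the pod next; done by hand, the usual triage would be (pod name copied from the logs; assumes the cluster is still running):

	kubectl --context old-k8s-version-140749 -n kube-system describe pod metrics-server-9975d5f86-lfq2q
	kubectl --context old-k8s-version-140749 get events -A --sort-by=.lastTimestamp | tail -n 20
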
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-140749 describe pod metrics-server-9975d5f86-lfq2q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-140749 describe pod metrics-server-9975d5f86-lfq2q: exit status 1 (96.441217ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-lfq2q" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-140749 describe pod metrics-server-9975d5f86-lfq2q: exit status 1
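
The NotFound above is the post-mortem racing the cluster: the field-selector query at helpers_test.go:272 still saw metrics-server-9975d5f86-lfq2q, but the pod no longer existed by the time describe ran. The same query can be re-run by hand to catch whichever pod is currently non-running (command copied from the helper):

	kubectl --context old-k8s-version-140749 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
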
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (372.97s)

Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.97
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.0/json-events 6.12
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.09
18 TestDownloadOnly/v1.32.0/DeleteAll 0.21
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 268.47
29 TestAddons/serial/Volcano 40.89
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.86
35 TestAddons/parallel/Registry 15.2
36 TestAddons/parallel/Ingress 20.06
37 TestAddons/parallel/InspektorGadget 10.82
38 TestAddons/parallel/MetricsServer 5.87
40 TestAddons/parallel/CSI 62.66
41 TestAddons/parallel/Headlamp 16.92
42 TestAddons/parallel/CloudSpanner 6.89
43 TestAddons/parallel/LocalPath 51.94
44 TestAddons/parallel/NvidiaDevicePlugin 5.63
45 TestAddons/parallel/Yakd 11.95
47 TestAddons/StoppedEnableDisable 12.26
48 TestCertOptions 41.46
49 TestCertExpiration 227.95
51 TestForceSystemdFlag 40.3
52 TestForceSystemdEnv 35.12
53 TestDockerEnvContainerd 44.9
58 TestErrorSpam/setup 29.84
59 TestErrorSpam/start 0.78
60 TestErrorSpam/status 1.3
61 TestErrorSpam/pause 2
62 TestErrorSpam/unpause 1.87
63 TestErrorSpam/stop 2.07
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 80.32
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.93
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.06
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.17
80 TestFunctional/serial/CacheCmd/cache/delete 0.15
81 TestFunctional/serial/MinikubeKubectlCmd 0.19
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 39.92
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.77
86 TestFunctional/serial/LogsFileCmd 1.76
87 TestFunctional/serial/InvalidService 4.26
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 7.67
91 TestFunctional/parallel/DryRun 0.54
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.06
97 TestFunctional/parallel/ServiceCmdConnect 12.65
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 25.82
101 TestFunctional/parallel/SSHCmd 0.67
102 TestFunctional/parallel/CpCmd 2.02
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.18
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
113 TestFunctional/parallel/License 0.34
114 TestFunctional/parallel/Version/short 0.15
115 TestFunctional/parallel/Version/components 1.49
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.87
121 TestFunctional/parallel/ImageCommands/Setup 0.77
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.5
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.37
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.65
129 TestFunctional/parallel/ProfileCmd/profile_list 0.52
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ServiceCmd/DeployApp 6.27
147 TestFunctional/parallel/ServiceCmd/List 0.64
148 TestFunctional/parallel/MountCmd/any-port 8.37
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
151 TestFunctional/parallel/ServiceCmd/Format 0.46
152 TestFunctional/parallel/ServiceCmd/URL 0.49
153 TestFunctional/parallel/MountCmd/specific-port 2.39
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 126.38
162 TestMultiControlPlane/serial/DeployApp 31.56
163 TestMultiControlPlane/serial/PingHostFromPods 1.69
164 TestMultiControlPlane/serial/AddWorkerNode 25.42
165 TestMultiControlPlane/serial/NodeLabels 0.12
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.98
167 TestMultiControlPlane/serial/CopyFile 19.34
168 TestMultiControlPlane/serial/StopSecondaryNode 12.86
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
170 TestMultiControlPlane/serial/RestartSecondaryNode 19.01
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 125.94
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.72
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
175 TestMultiControlPlane/serial/StopCluster 35.9
176 TestMultiControlPlane/serial/RestartCluster 79.72
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
178 TestMultiControlPlane/serial/AddSecondaryNode 45.75
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
183 TestJSONOutput/start/Command 56.48
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.74
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.8
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
208 TestKicCustomNetwork/create_custom_network 38.14
209 TestKicCustomNetwork/use_default_bridge_network 32.53
210 TestKicExistingNetwork 30.95
211 TestKicCustomSubnet 39.41
212 TestKicStaticIP 35.74
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 65.61
217 TestMountStart/serial/StartWithMountFirst 8.87
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 8.7
220 TestMountStart/serial/VerifyMountSecond 0.27
221 TestMountStart/serial/DeleteFirst 1.62
222 TestMountStart/serial/VerifyMountPostDelete 0.26
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.19
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 109.34
229 TestMultiNode/serial/DeployApp2Nodes 15.34
230 TestMultiNode/serial/PingHostFrom2Pods 1.03
231 TestMultiNode/serial/AddNode 16.73
232 TestMultiNode/serial/MultiNodeLabels 0.1
233 TestMultiNode/serial/ProfileList 0.68
234 TestMultiNode/serial/CopyFile 10.14
235 TestMultiNode/serial/StopNode 2.25
236 TestMultiNode/serial/StartAfterStop 10.19
237 TestMultiNode/serial/RestartKeepsNodes 89.18
238 TestMultiNode/serial/DeleteNode 5.3
239 TestMultiNode/serial/StopMultiNode 23.95
240 TestMultiNode/serial/RestartMultiNode 54.22
241 TestMultiNode/serial/ValidateNameConflict 36.03
246 TestPreload 126.52
248 TestScheduledStopUnix 109.82
251 TestInsufficientStorage 13.19
252 TestRunningBinaryUpgrade 94.77
254 TestKubernetesUpgrade 108.53
255 TestMissingContainerUpgrade 184.29
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 39.4
259 TestNoKubernetes/serial/StartWithStopK8s 8.46
260 TestNoKubernetes/serial/Start 9.57
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
262 TestNoKubernetes/serial/ProfileList 1.02
263 TestNoKubernetes/serial/Stop 1.2
264 TestNoKubernetes/serial/StartNoArgs 6.34
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
266 TestStoppedBinaryUpgrade/Setup 0.74
267 TestStoppedBinaryUpgrade/Upgrade 119.62
276 TestPause/serial/Start 103.33
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.5
285 TestNetworkPlugins/group/false 3.89
289 TestPause/serial/SecondStartNoReconfiguration 8.28
290 TestPause/serial/Pause 0.88
291 TestPause/serial/VerifyStatus 0.44
292 TestPause/serial/Unpause 0.95
293 TestPause/serial/PauseAgain 1.01
294 TestPause/serial/DeletePaused 3.15
295 TestPause/serial/VerifyDeletedResources 0.45
297 TestStartStop/group/old-k8s-version/serial/FirstStart 147.86
298 TestStartStop/group/old-k8s-version/serial/DeployApp 9.61
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
300 TestStartStop/group/old-k8s-version/serial/Stop 12.07
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
304 TestStartStop/group/no-preload/serial/FirstStart 77.08
305 TestStartStop/group/no-preload/serial/DeployApp 8.39
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
307 TestStartStop/group/no-preload/serial/Stop 12.11
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/no-preload/serial/SecondStart 267.66
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
316 TestStartStop/group/no-preload/serial/Pause 4.16
317 TestStartStop/group/old-k8s-version/serial/Pause 4.25
319 TestStartStop/group/embed-certs/serial/FirstStart 96.14
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.34
322 TestStartStop/group/embed-certs/serial/DeployApp 9.36
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.41
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
325 TestStartStop/group/embed-certs/serial/Stop 12.26
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.99
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
329 TestStartStop/group/embed-certs/serial/SecondStart 269.54
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 301.36
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
335 TestStartStop/group/embed-certs/serial/Pause 3.29
337 TestStartStop/group/newest-cni/serial/FirstStart 36.81
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.16
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.88
342 TestNetworkPlugins/group/auto/Start 87.73
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.86
345 TestStartStop/group/newest-cni/serial/Stop 3.68
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
347 TestStartStop/group/newest-cni/serial/SecondStart 25.68
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
351 TestStartStop/group/newest-cni/serial/Pause 3.83
352 TestNetworkPlugins/group/kindnet/Start 53.48
353 TestNetworkPlugins/group/auto/KubeletFlags 0.29
354 TestNetworkPlugins/group/auto/NetCatPod 9.29
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/auto/DNS 0.21
357 TestNetworkPlugins/group/auto/Localhost 0.15
358 TestNetworkPlugins/group/auto/HairPin 0.15
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
360 TestNetworkPlugins/group/kindnet/NetCatPod 8.28
361 TestNetworkPlugins/group/kindnet/DNS 0.4
362 TestNetworkPlugins/group/kindnet/Localhost 0.21
363 TestNetworkPlugins/group/kindnet/HairPin 0.2
364 TestNetworkPlugins/group/calico/Start 77.95
365 TestNetworkPlugins/group/custom-flannel/Start 60.85
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/custom-flannel/DNS 0.26
370 TestNetworkPlugins/group/calico/KubeletFlags 0.42
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
372 TestNetworkPlugins/group/calico/NetCatPod 10.33
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
374 TestNetworkPlugins/group/calico/DNS 0.24
375 TestNetworkPlugins/group/calico/Localhost 0.18
376 TestNetworkPlugins/group/calico/HairPin 0.27
377 TestNetworkPlugins/group/enable-default-cni/Start 50.07
378 TestNetworkPlugins/group/flannel/Start 60.09
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.4
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
384 TestNetworkPlugins/group/flannel/ControllerPod 6.01
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
386 TestNetworkPlugins/group/flannel/NetCatPod 10.45
387 TestNetworkPlugins/group/bridge/Start 80.76
388 TestNetworkPlugins/group/flannel/DNS 0.55
389 TestNetworkPlugins/group/flannel/Localhost 0.44
390 TestNetworkPlugins/group/flannel/HairPin 0.23
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
392 TestNetworkPlugins/group/bridge/NetCatPod 8.27
393 TestNetworkPlugins/group/bridge/DNS 0.17
394 TestNetworkPlugins/group/bridge/Localhost 0.14
395 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (6.97s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-189857 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-189857 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.964547283s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.97s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 13:39:04.017813  747256 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 13:39:04.017929  747256 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
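
preload-exists downloads nothing itself; the json-events run just before it already fetched the tarball, so the check amounts to confirming the file is present in the cache path logged above:

	ls -lh /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/
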

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-189857
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-189857: exit status 85 (104.464333ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-189857 | jenkins | v1.35.0 | 20 Jan 25 13:38 UTC |          |
	|         | -p download-only-189857        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 13:38:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 13:38:57.106674  747262 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:38:57.106841  747262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:38:57.106864  747262 out.go:358] Setting ErrFile to fd 2...
	I0120 13:38:57.106877  747262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:38:57.107630  747262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	W0120 13:38:57.107793  747262 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20242-741865/.minikube/config/config.json: open /home/jenkins/minikube-integration/20242-741865/.minikube/config/config.json: no such file or directory
	I0120 13:38:57.108227  747262 out.go:352] Setting JSON to true
	I0120 13:38:57.109061  747262 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12082,"bootTime":1737368255,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 13:38:57.109196  747262 start.go:139] virtualization:  
	I0120 13:38:57.113459  747262 out.go:97] [download-only-189857] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0120 13:38:57.113647  747262 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 13:38:57.113751  747262 notify.go:220] Checking for updates...
	I0120 13:38:57.117239  747262 out.go:169] MINIKUBE_LOCATION=20242
	I0120 13:38:57.120323  747262 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:38:57.123206  747262 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 13:38:57.126176  747262 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	I0120 13:38:57.129247  747262 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0120 13:38:57.134820  747262 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 13:38:57.135144  747262 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:38:57.170153  747262 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 13:38:57.170253  747262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 13:38:57.235923  747262 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 13:38:57.212991136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 13:38:57.236042  747262 docker.go:318] overlay module found
	I0120 13:38:57.239024  747262 out.go:97] Using the docker driver based on user configuration
	I0120 13:38:57.239055  747262 start.go:297] selected driver: docker
	I0120 13:38:57.239063  747262 start.go:901] validating driver "docker" against <nil>
	I0120 13:38:57.239174  747262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 13:38:57.288876  747262 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 13:38:57.280434661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 13:38:57.289087  747262 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 13:38:57.289386  747262 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0120 13:38:57.289552  747262 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 13:38:57.292866  747262 out.go:169] Using Docker driver with root privileges
	I0120 13:38:57.295599  747262 cni.go:84] Creating CNI manager for ""
	I0120 13:38:57.295659  747262 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 13:38:57.295681  747262 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 13:38:57.295773  747262 start.go:340] cluster config:
	{Name:download-only-189857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-189857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:38:57.298725  747262 out.go:97] Starting "download-only-189857" primary control-plane node in "download-only-189857" cluster
	I0120 13:38:57.298749  747262 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 13:38:57.301728  747262 out.go:97] Pulling base image v0.0.46 ...
	I0120 13:38:57.301756  747262 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 13:38:57.301913  747262 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 13:38:57.323800  747262 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 13:38:57.323967  747262 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0120 13:38:57.324068  747262 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 13:38:57.358366  747262 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0120 13:38:57.358391  747262 cache.go:56] Caching tarball of preloaded images
	I0120 13:38:57.358558  747262 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 13:38:57.361861  747262 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 13:38:57.361889  747262 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 13:38:57.446246  747262 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0120 13:39:01.813437  747262 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	
	
	* The control-plane node download-only-189857 host does not exist
	  To start a cluster, run: "minikube start -p download-only-189857"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
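
The exit status 85 is expected rather than a regression: a --download-only run never creates the control-plane host (the captured output says exactly that), so "minikube logs" has nothing to collect, and the harness notes the error but still passes the test. The preload itself was fetched with an md5 checksum in the URL, so it can be re-verified offline (checksum copied from the download line above):

	md5sum /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	# expect: 7e3d48ccb9f143791669d02e14ce1643
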

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-189857
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.0/json-events (6.12s)

=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-896170 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-896170 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.117125591s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (6.12s)

TestDownloadOnly/v1.32.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 13:39:10.602344  747256 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 13:39:10.602382  747256 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

TestDownloadOnly/v1.32.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-896170
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-896170: exit status 85 (88.412536ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-189857 | jenkins | v1.35.0 | 20 Jan 25 13:38 UTC |                     |
	|         | -p download-only-189857        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 13:39 UTC | 20 Jan 25 13:39 UTC |
	| delete  | -p download-only-189857        | download-only-189857 | jenkins | v1.35.0 | 20 Jan 25 13:39 UTC | 20 Jan 25 13:39 UTC |
	| start   | -o=json --download-only        | download-only-896170 | jenkins | v1.35.0 | 20 Jan 25 13:39 UTC |                     |
	|         | -p download-only-896170        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 13:39:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 13:39:04.532906  747463 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:39:04.533105  747463 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:39:04.533120  747463 out.go:358] Setting ErrFile to fd 2...
	I0120 13:39:04.533126  747463 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:39:04.533403  747463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 13:39:04.533864  747463 out.go:352] Setting JSON to true
	I0120 13:39:04.534726  747463 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12090,"bootTime":1737368255,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 13:39:04.534801  747463 start.go:139] virtualization:  
	I0120 13:39:04.538544  747463 out.go:97] [download-only-896170] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 13:39:04.538798  747463 notify.go:220] Checking for updates...
	I0120 13:39:04.541752  747463 out.go:169] MINIKUBE_LOCATION=20242
	I0120 13:39:04.544839  747463 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:39:04.547814  747463 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 13:39:04.550761  747463 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	I0120 13:39:04.553783  747463 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0120 13:39:04.559504  747463 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 13:39:04.559779  747463 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:39:04.587234  747463 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 13:39:04.587352  747463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 13:39:04.643872  747463 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 13:39:04.634454573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 13:39:04.644012  747463 docker.go:318] overlay module found
	I0120 13:39:04.646965  747463 out.go:97] Using the docker driver based on user configuration
	I0120 13:39:04.646998  747463 start.go:297] selected driver: docker
	I0120 13:39:04.647006  747463 start.go:901] validating driver "docker" against <nil>
	I0120 13:39:04.647132  747463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 13:39:04.697692  747463 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 13:39:04.689200596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 13:39:04.697906  747463 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 13:39:04.698210  747463 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0120 13:39:04.698369  747463 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 13:39:04.701474  747463 out.go:169] Using Docker driver with root privileges
	I0120 13:39:04.704174  747463 cni.go:84] Creating CNI manager for ""
	I0120 13:39:04.704237  747463 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 13:39:04.704246  747463 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 13:39:04.704338  747463 start.go:340] cluster config:
	{Name:download-only-896170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:download-only-896170 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:39:04.707371  747463 out.go:97] Starting "download-only-896170" primary control-plane node in "download-only-896170" cluster
	I0120 13:39:04.707400  747463 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 13:39:04.710350  747463 out.go:97] Pulling base image v0.0.46 ...
	I0120 13:39:04.710379  747463 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 13:39:04.710561  747463 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 13:39:04.726617  747463 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 13:39:04.726746  747463 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0120 13:39:04.726772  747463 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0120 13:39:04.726778  747463 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0120 13:39:04.726787  747463 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0120 13:39:04.808369  747463 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
	I0120 13:39:04.808397  747463 cache.go:56] Caching tarball of preloaded images
	I0120 13:39:04.809415  747463 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 13:39:04.812557  747463 out.go:97] Downloading Kubernetes v1.32.0 preload ...
	I0120 13:39:04.812586  747463 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 13:39:04.909863  747463 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:bf17808bb02e2942f486582f7290de30 -> /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
	I0120 13:39:08.944190  747463 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 13:39:08.944296  747463 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20242-741865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-896170 host does not exist
	  To start a cluster, run: "minikube start -p download-only-896170"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.09s)

TestDownloadOnly/v1.32.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.21s)

TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-896170
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I0120 13:39:11.897728  747256 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-711036 --alsologtostderr --binary-mirror http://127.0.0.1:42931 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-711036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-711036
--- PASS: TestBinaryMirror (0.59s)
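
TestBinaryMirror points minikube at a local HTTP endpoint (--binary-mirror http://127.0.0.1:42931) instead of dl.k8s.io. A stand-in mirror is essentially just a static file server; here is a minimal Go sketch under the assumption that a local ./mirror directory is laid out the way the client expects (the directory name and layout are placeholders, not something this test defines):

// Minimal sketch of a local binary mirror like the one the test points
// --binary-mirror at: serve a directory over HTTP on a fixed port.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror (placeholder directory) at http://127.0.0.1:42931.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:42931", nil))
}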

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-695399
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-695399: exit status 85 (72.175688ms)

-- stdout --
	* Profile "addons-695399" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-695399"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-695399
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-695399: exit status 85 (75.27824ms)

-- stdout --
	* Profile "addons-695399" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-695399"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (268.47s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-695399 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-695399 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m28.471602847s)
--- PASS: TestAddons/Setup (268.47s)

TestAddons/serial/Volcano (40.89s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 58.161037ms
addons_test.go:815: volcano-admission stabilized in 58.241899ms
addons_test.go:823: volcano-controller stabilized in 59.060774ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-ft9tc" [6e81a240-ee18-4ebd-93fe-1db124a85686] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003725578s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-76g9c" [b31b627c-e100-49b5-b9f4-9b40646dbef3] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003436653s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-jdtwd" [da2fdfcb-857e-49ba-ad8f-3df559bd723a] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004010712s
addons_test.go:842: (dbg) Run:  kubectl --context addons-695399 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-695399 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-695399 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [bd172a8d-5fb4-4288-a076-9f20af5791c4] Pending
helpers_test.go:344: "test-job-nginx-0" [bd172a8d-5fb4-4288-a076-9f20af5791c4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [bd172a8d-5fb4-4288-a076-9f20af5791c4] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003740322s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-695399 addons disable volcano --alsologtostderr -v=1: (11.259507014s)
--- PASS: TestAddons/serial/Volcano (40.89s)
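
The "waiting 6m0s for pods matching ..." lines throughout this report come from a shared helper that polls the API server by label selector until the matching pods report Running. The following is a simplified sketch of that loop using client-go, not the test's actual helper; the kubeconfig path, namespace, selector, and timeout are taken from or modeled on the log above, and the polling interval is an assumption:

// Rough sketch of a "wait for pods matching selector" loop with client-go.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("volcano-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=volcano-scheduler"})
		if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
			fmt.Println("pods healthy")
			return
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	log.Fatal("timed out waiting for pods")
}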

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-695399 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-695399 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-695399 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-695399 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a9b658ae-921d-4afc-9c20-8c5bbb590eac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a9b658ae-921d-4afc-9c20-8c5bbb590eac] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004329358s
addons_test.go:633: (dbg) Run:  kubectl --context addons-695399 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-695399 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-695399 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-695399 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.86s)

TestAddons/parallel/Registry (15.2s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.246733ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-dfrwp" [084bfe48-0dcb-4b21-8d5b-882b297298c5] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003872077s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zzcgf" [8703cdd1-13f9-46a4-9caa-28386ea02d35] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00393586s
addons_test.go:331: (dbg) Run:  kubectl --context addons-695399 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-695399 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-695399 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.151498102s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 ip
2025/01/20 13:44:55 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.20s)
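
The registry check above health-probes the addon by running "wget --spider" against the in-cluster service DNS name from a throwaway busybox pod. The same reachability probe, sketched in Go (it would only resolve when run from inside the cluster, and the timeout is an assumption):

// Sketch of the registry reachability check: an HTTP HEAD request
// against the in-cluster service name, like "wget --spider" above.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{Timeout: 5 * time.Second} // assumed timeout
	resp, err := c.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}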

TestAddons/parallel/Ingress (20.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-695399 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-695399 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-695399 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [812f3ba9-7a8b-4032-bd35-d83b2090a0c7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [812f3ba9-7a8b-4032-bd35-d83b2090a0c7] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003164352s
I0120 13:46:14.729934  747256 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-695399 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-695399 addons disable ingress-dns --alsologtostderr -v=1: (1.154939979s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-695399 addons disable ingress --alsologtostderr -v=1: (7.96690561s)
--- PASS: TestAddons/parallel/Ingress (20.06s)
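
The ingress verification relies on a Host-header override: curl is aimed at the node itself while presenting the hostname the ingress rule matches (nginx.example.com). A Go equivalent of that request, assuming the node IP reported by "minikube ip" above (in the test the curl actually runs inside the node via "minikube ssh", so hitting the IP directly is a simplification):

// Sketch of the Host-header trick used to exercise an ingress rule.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.49.2/", nil) // node IP from the log
	if err != nil {
		log.Fatal(err)
	}
	req.Host = "nginx.example.com" // matches the ingress rule, not the URL
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}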

TestAddons/parallel/InspektorGadget (10.82s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-txjtn" [f55c9f2e-0687-43da-a17f-a6e2bfabe081] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004346527s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-695399 addons disable inspektor-gadget --alsologtostderr -v=1: (5.815193551s)
--- PASS: TestAddons/parallel/InspektorGadget (10.82s)

TestAddons/parallel/MetricsServer (5.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.928847ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-f7qmg" [cd1ce5cd-4ee1-49fe-9056-b6faaf21bfdf] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00463917s
addons_test.go:402: (dbg) Run:  kubectl --context addons-695399 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

TestAddons/parallel/CSI (62.66s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0120 13:45:22.616865  747256 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0120 13:45:22.622522  747256 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 13:45:22.622559  747256 kapi.go:107] duration metric: took 10.082393ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.092748ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-695399 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-695399 get pvc hpvc -o jsonpath={.status.phase} -n default
(the phase check above was run 16 times in total while the test waited for pvc "hpvc" to bind)
addons_test.go:501: (dbg) Run:  kubectl --context addons-695399 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [07bd792c-0edb-493b-8759-7d6e79dc9d37] Pending
helpers_test.go:344: "task-pv-pod" [07bd792c-0edb-493b-8759-7d6e79dc9d37] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [07bd792c-0edb-493b-8759-7d6e79dc9d37] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00520973s
addons_test.go:511: (dbg) Run:  kubectl --context addons-695399 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-695399 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-695399 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-695399 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-695399 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-695399 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-695399 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
(the phase check above was run 20 times in total while the test waited for pvc "hpvc-restore" to bind)
addons_test.go:543: (dbg) Run:  kubectl --context addons-695399 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9fd58735-00fc-463f-b100-e9125cc0fc57] Pending
helpers_test.go:344: "task-pv-pod-restore" [9fd58735-00fc-463f-b100-e9125cc0fc57] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9fd58735-00fc-463f-b100-e9125cc0fc57] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.025382914s
addons_test.go:553: (dbg) Run:  kubectl --context addons-695399 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-695399 delete pod task-pv-pod-restore: (1.487720782s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-695399 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-695399 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-695399 addons disable volumesnapshots --alsologtostderr -v=1: (1.207908967s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-695399 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.778519513s)
--- PASS: TestAddons/parallel/CSI (62.66s)
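
The repeated "kubectl get pvc ... -o jsonpath={.status.phase}" invocations above are simple phase polls: the jsonpath expression extracts one field from the claim's status. Roughly the same loop with client-go, as a sketch (kubeconfig path and poll interval are placeholders; claim name and namespace are from the test):

// Sketch: poll a PersistentVolumeClaim until it reports phase Bound.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(
			context.TODO(), "hpvc", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pvc phase:", pvc.Status.Phase)
		if pvc.Status.Phase == corev1.ClaimBound {
			return
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
}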

TestAddons/parallel/Headlamp (16.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-695399 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-695399 --alsologtostderr -v=1: (1.104355448s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-q6gs7" [952543ea-b319-42c5-918a-8d4825da255d] Pending
helpers_test.go:344: "headlamp-69d78d796f-q6gs7" [952543ea-b319-42c5-918a-8d4825da255d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-q6gs7" [952543ea-b319-42c5-918a-8d4825da255d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003632729s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-695399 addons disable headlamp --alsologtostderr -v=1: (5.810637503s)
--- PASS: TestAddons/parallel/Headlamp (16.92s)

TestAddons/parallel/CloudSpanner (6.89s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-8hpzj" [ea9e6b19-6e66-4002-93d2-b91b2c36ef0b] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004982011s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.89s)

TestAddons/parallel/LocalPath (51.94s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-695399 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-695399 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-695399 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-695399 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-695399 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-695399 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-695399 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a34a5d47-dfd0-49b8-9dcf-6278a053a7eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a34a5d47-dfd0-49b8-9dcf-6278a053a7eb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a34a5d47-dfd0-49b8-9dcf-6278a053a7eb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004082416s
addons_test.go:906: (dbg) Run:  kubectl --context addons-695399 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 ssh "cat /opt/local-path-provisioner/pvc-5117bfb7-9b67-4a59-8b16-9932ad54613c_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-695399 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-695399 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-695399 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.700073152s)
--- PASS: TestAddons/parallel/LocalPath (51.94s)

TestAddons/parallel/NvidiaDevicePlugin (5.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dgjbv" [affbb752-30b3-403c-837c-4f24477b3c19] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005094121s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.63s)

TestAddons/parallel/Yakd (11.95s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-z7s2g" [4c93508c-834d-4fe9-bdab-86a4f2af3139] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003354588s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-695399 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-695399 addons disable yakd --alsologtostderr -v=1: (5.943378076s)
--- PASS: TestAddons/parallel/Yakd (11.95s)

TestAddons/StoppedEnableDisable (12.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-695399
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-695399: (11.967509207s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-695399
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-695399
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-695399
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

TestCertOptions (41.46s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-968792 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0120 14:23:41.085131  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-968792 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (38.753404874s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-968792 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-968792 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-968792 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-968792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-968792
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-968792: (2.034870951s)
--- PASS: TestCertOptions (41.46s)
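
TestCertOptions asserts that the extra --apiserver-ips and --apiserver-names values end up in the apiserver certificate, which it inspects with "openssl x509 -text -noout". The same inspection can be sketched with Go's crypto/x509, assuming a local copy of the cert (in the test it lives inside the node at /var/lib/minikube/certs/apiserver.crt):

// Sketch: print the SANs of an apiserver certificate, as the openssl
// step above does. The local path is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com, ...
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}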

TestCertExpiration (227.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-857413 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-857413 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.636808736s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-857413 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-857413 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.796239553s)
helpers_test.go:175: Cleaning up "cert-expiration-857413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-857413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-857413: (2.511122987s)
--- PASS: TestCertExpiration (227.95s)
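
The two starts bracket a deliberate wait: the cluster first comes up with certificates valid for only 3 minutes, the test then waits out the expiry, and the quick second start (6.8s) shows minikube regenerating the expired certificates instead of failing. That wait is why under 50s of recorded command time adds up to 227.95s of wall clock. Sketched by hand, assuming a plain sleep stands in for the test's wait:

	minikube start -p cert-expiration-857413 --memory=2048 --cert-expiration=3m \
	    --driver=docker --container-runtime=containerd
	sleep 180    # let the 3m certificates lapse
	minikube start -p cert-expiration-857413 --memory=2048 --cert-expiration=8760h \
	    --driver=docker --container-runtime=containerd    # restart renews the certs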

TestForceSystemdFlag (40.3s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-573250 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-573250 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.948478431s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-573250 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-573250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-573250
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-573250: (2.045416386s)
--- PASS: TestForceSystemdFlag (40.30s)
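
The assertion rides on the containerd config that --force-systemd produces. A rough manual equivalent; the grep target is an assumption (for containerd the systemd cgroup driver is the SystemdCgroup option of the runc runtime), since the test only cats the file:

	minikube start -p force-systemd-flag-573250 --memory=2048 --force-systemd \
	    --driver=docker --container-runtime=containerd
	minikube -p force-systemd-flag-573250 ssh "cat /etc/containerd/config.toml" \
	    | grep SystemdCgroup    # expect: SystemdCgroup = true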

TestForceSystemdEnv (35.12s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-071479 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-071479 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.397705891s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-071479 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-071479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-071479
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-071479: (2.25813618s)
--- PASS: TestForceSystemdEnv (35.12s)
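
Same containerd-config check as TestForceSystemdFlag, but driven by the environment rather than a flag. The variable spelling below is an assumption inferred from the test name and from the MINIKUBE_FORCE_SYSTEMD entry in minikube's own start-up environment listing; the log never shows it being set:

	# MINIKUBE_FORCE_SYSTEMD=true is an assumption, not shown in this log
	MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-071479 --memory=2048 \
	    --driver=docker --container-runtime=containerd
	minikube -p force-systemd-env-071479 ssh "cat /etc/containerd/config.toml"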

TestDockerEnvContainerd (44.9s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-192295 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-192295 --driver=docker  --container-runtime=containerd: (29.015632879s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-192295"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-BPWk55qOkKWp/agent.769450" SSH_AGENT_PID="769451" DOCKER_HOST=ssh://docker@127.0.0.1:33544 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-BPWk55qOkKWp/agent.769450" SSH_AGENT_PID="769451" DOCKER_HOST=ssh://docker@127.0.0.1:33544 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-BPWk55qOkKWp/agent.769450" SSH_AGENT_PID="769451" DOCKER_HOST=ssh://docker@127.0.0.1:33544 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.217491815s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-BPWk55qOkKWp/agent.769450" SSH_AGENT_PID="769451" DOCKER_HOST=ssh://docker@127.0.0.1:33544 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-192295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-192295
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-192295: (2.312536698s)
--- PASS: TestDockerEnvContainerd (44.90s)
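
docker-env is exercised here in its SSH mode: --ssh-host points the docker client at the node over ssh:// and --ssh-add loads the node's key into an agent, which is exactly the SSH_AUTH_SOCK/DOCKER_HOST plumbing visible in the commands above. Condensed:

	eval "$(minikube -p dockerenv-192295 docker-env --ssh-host --ssh-add)"
	docker version        # the client now reaches dockerd inside the node over ssh://
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls       # the freshly built image is listed from inside the node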

TestErrorSpam/setup (29.84s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-514503 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-514503 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-514503 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-514503 --driver=docker  --container-runtime=containerd: (29.83532728s)
--- PASS: TestErrorSpam/setup (29.84s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.3s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 status
--- PASS: TestErrorSpam/status (1.30s)

TestErrorSpam/pause (2s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 pause
--- PASS: TestErrorSpam/pause (2.00s)

TestErrorSpam/unpause (1.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

TestErrorSpam/stop (2.07s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 stop: (1.848633603s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 stop
--- PASS: TestErrorSpam/stop (2.07s)
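
Every TestErrorSpam subtest follows the same pattern: run one subcommand repeatedly against the nospam-514503 profile (the error_spam_test.go:159 runs, then a final :182 run) and, as the group's name suggests, fail on unexpected warning or error spam in the output. The raw commands, one per subtest:

	out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 start --dry-run
	out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 status
	out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 pause
	out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 unpause
	out/minikube-linux-arm64 -p nospam-514503 --log_dir /tmp/nospam-514503 stop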

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20242-741865/.minikube/files/etc/test/nested/copy/747256/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.32s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-563229 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0120 13:48:41.085876  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:41.092208  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:41.103593  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:41.124952  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:41.166307  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:41.247627  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:41.409083  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:41.730660  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:42.372783  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:43.654374  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:46.216559  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:48:51.338487  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:49:01.580787  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:49:22.062535  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-563229 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m20.322849016s)
--- PASS: TestFunctional/serial/StartWithProxy (80.32s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.93s)

=== RUN   TestFunctional/serial/SoftStart
I0120 13:49:30.999281  747256 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-563229 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-563229 --alsologtostderr -v=8: (5.930351308s)
functional_test.go:663: soft start took 5.931721103s for "functional-563229" cluster.
I0120 13:49:36.929965  747256 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (5.93s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-563229 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 cache add registry.k8s.io/pause:3.1: (1.515259262s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 cache add registry.k8s.io/pause:3.3: (1.337840773s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 cache add registry.k8s.io/pause:latest: (1.207651763s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.06s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-563229 /tmp/TestFunctionalserialCacheCmdcacheadd_local1315748023/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 cache add minikube-local-cache-test:functional-563229
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 cache delete minikube-local-cache-test:functional-563229
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-563229
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-563229 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.417495ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 cache reload: (1.195473147s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)
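
Taken together the cache subtests make a full round trip: add images to minikube's host-side cache (both remote tags and a locally built image), verify them inside the node with crictl, delete one in-node, and restore it with cache reload. Condensed:

	minikube -p functional-563229 cache add registry.k8s.io/pause:latest                 # cache on the host and load into the node
	minikube -p functional-563229 ssh sudo crictl rmi registry.k8s.io/pause:latest      # remove it inside the node only
	minikube -p functional-563229 ssh sudo crictl inspecti registry.k8s.io/pause:latest # fails: no such image
	minikube -p functional-563229 cache reload                                          # push cached images back into the node
	minikube -p functional-563229 ssh sudo crictl inspecti registry.k8s.io/pause:latest # succeeds again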

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 kubectl -- --context functional-563229 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.19s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-563229 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (39.92s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-563229 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0120 13:50:03.025799  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-563229 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.923234363s)
functional_test.go:761: restart took 39.92332649s for "functional-563229" cluster.
I0120 13:50:25.465260  747256 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (39.92s)
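
--extra-config hands component.key=value pairs to the matching Kubernetes component through minikube's kubeadm configuration, so the restart below comes back up with the NamespaceAutoProvision admission plugin enabled on the apiserver:

	minikube start -p functional-563229 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	    --wait=all    # restart the existing profile with the extra apiserver flag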

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-563229 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.77s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 logs: (1.772060922s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)

TestFunctional/serial/LogsFileCmd (1.76s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 logs --file /tmp/TestFunctionalserialLogsFileCmd2364554730/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 logs --file /tmp/TestFunctionalserialLogsFileCmd2364554730/001/logs.txt: (1.758838019s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

TestFunctional/serial/InvalidService (4.26s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-563229 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-563229
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-563229: exit status 115 (430.403028ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30516 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-563229 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)
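
Judging by the SVC_UNREACHABLE message, testdata/invalidsvc.yaml defines a service with no running backing pod, so minikube service can resolve the NodePort (the URL table above) but must exit 115 rather than hand back a dead URL. Reproduced:

	kubectl --context functional-563229 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-563229    # expected failure: exit 115, SVC_UNREACHABLE
	kubectl --context functional-563229 delete -f testdata/invalidsvc.yaml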

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-563229 config get cpus: exit status 14 (86.058055ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-563229 config get cpus: exit status 14 (75.714271ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
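
Exit status 14 is what minikube returns when config get finds no value for a key, and the test hits it both before and after the set/unset pair. The full cycle:

	minikube -p functional-563229 config unset cpus    # no-op if already unset
	minikube -p functional-563229 config get cpus      # exit 14: key not in config
	minikube -p functional-563229 config set cpus 2
	minikube -p functional-563229 config get cpus      # prints 2
	minikube -p functional-563229 config unset cpus
	minikube -p functional-563229 config get cpus      # exit 14 again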

TestFunctional/parallel/DashboardCmd (7.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-563229 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-563229 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 785893: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.67s)

TestFunctional/parallel/DryRun (0.54s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-563229 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-563229 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (198.096906ms)
-- stdout --
	* [functional-563229] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0120 13:51:08.507505  784613 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:51:08.507741  784613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:51:08.507772  784613 out.go:358] Setting ErrFile to fd 2...
	I0120 13:51:08.507796  784613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:51:08.508064  784613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 13:51:08.508451  784613 out.go:352] Setting JSON to false
	I0120 13:51:08.509449  784613 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12814,"bootTime":1737368255,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 13:51:08.509553  784613 start.go:139] virtualization:  
	I0120 13:51:08.513878  784613 out.go:177] * [functional-563229] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 13:51:08.517616  784613 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:51:08.517793  784613 notify.go:220] Checking for updates...
	I0120 13:51:08.523381  784613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:51:08.526242  784613 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 13:51:08.529182  784613 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	I0120 13:51:08.532017  784613 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 13:51:08.534940  784613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:51:08.538248  784613 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:51:08.538779  784613 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:51:08.575651  784613 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 13:51:08.575770  784613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 13:51:08.632800  784613 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 13:51:08.622779771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 13:51:08.632923  784613 docker.go:318] overlay module found
	I0120 13:51:08.637824  784613 out.go:177] * Using the docker driver based on existing profile
	I0120 13:51:08.640653  784613 start.go:297] selected driver: docker
	I0120 13:51:08.640674  784613 start.go:901] validating driver "docker" against &{Name:functional-563229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-563229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:51:08.640796  784613 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:51:08.644353  784613 out.go:201] 
	W0120 13:51:08.647642  784613 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 13:51:08.650649  784613 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-563229 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.54s)
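
--dry-run runs the full validation pass without creating or mutating anything, which makes it a cheap probe for whether a start invocation would be rejected; here the 250MB request trips the 1800MB usable minimum and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, flag-light dry run validates cleanly:

	minikube start -p functional-563229 --dry-run --memory 250MB \
	    --driver=docker --container-runtime=containerd    # exit 23: memory below usable minimum
	minikube start -p functional-563229 --dry-run \
	    --driver=docker --container-runtime=containerd    # valid config, nothing is touched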

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-563229 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-563229 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (192.074036ms)
-- stdout --
	* [functional-563229] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0120 13:51:12.479417  785660 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:51:12.479659  785660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:51:12.479691  785660 out.go:358] Setting ErrFile to fd 2...
	I0120 13:51:12.479729  785660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:51:12.480721  785660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 13:51:12.482376  785660 out.go:352] Setting JSON to false
	I0120 13:51:12.483425  785660 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12818,"bootTime":1737368255,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 13:51:12.483534  785660 start.go:139] virtualization:  
	I0120 13:51:12.485245  785660 out.go:177] * [functional-563229] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0120 13:51:12.486847  785660 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:51:12.487001  785660 notify.go:220] Checking for updates...
	I0120 13:51:12.489724  785660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:51:12.492030  785660 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 13:51:12.494027  785660 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	I0120 13:51:12.495949  785660 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 13:51:12.497990  785660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:51:12.500829  785660 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:51:12.502477  785660 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:51:12.532285  785660 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 13:51:12.532410  785660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 13:51:12.593708  785660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 13:51:12.58419893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 13:51:12.593820  785660 docker.go:318] overlay module found
	I0120 13:51:12.596559  785660 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0120 13:51:12.599076  785660 start.go:297] selected driver: docker
	I0120 13:51:12.599100  785660 start.go:901] validating driver "docker" against &{Name:functional-563229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-563229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:51:12.599240  785660 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:51:12.602491  785660 out.go:201] 
	W0120 13:51:12.605161  785660 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 13:51:12.607828  785660 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
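
Same dry-run failure as TestFunctional/parallel/DryRun, but localized: the French output above reads "Using the docker driver based on existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation 250MiB is less than the usable minimum of 1800MB". How the test selects the French catalog is not visible in this log; presumably it is locale environment variables, along the lines of:

	# LC_ALL=fr is an assumption about the harness; the log shows only the localized output
	LC_ALL=fr minikube start -p functional-563229 --dry-run --memory 250MB \
	    --driver=docker --container-runtime=containerd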

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
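
Besides the human-readable default, status takes a Go template (-f) and JSON (-o json); the keys exercised above are .Host, .Kubelet, .APIServer and .Kubeconfig (the "kublet" in the recorded command is the test's own label text, not a template key). For scripting:

	minikube -p functional-563229 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	minikube -p functional-563229 status -o json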

TestFunctional/parallel/ServiceCmdConnect (12.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-563229 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-563229 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-n6x76" [cd733605-c851-492f-a5b3-dc61329d67d1] Pending
helpers_test.go:344: "hello-node-connect-8449669db6-n6x76" [cd733605-c851-492f-a5b3-dc61329d67d1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-n6x76" [cd733605-c851-492f-a5b3-dc61329d67d1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004862147s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31842
functional_test.go:1675: http://192.168.49.2:31842: success! body:
Hostname: hello-node-connect-8449669db6-n6x76
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31842
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.65s)
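
The test wires a NodePort service end to end: deploy an echoserver, expose it, resolve the URL through minikube, and assert on the HTTP body shown above. Condensed:

	kubectl --context functional-563229 create deployment hello-node-connect \
	    --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-563229 expose deployment hello-node-connect \
	    --type=NodePort --port=8080
	minikube -p functional-563229 service hello-node-connect --url    # e.g. http://192.168.49.2:31842
	curl "$(minikube -p functional-563229 service hello-node-connect --url)"   # echoes the request back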

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (25.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [41f6053b-714e-4986-9231-475e96264d24] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004591697s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-563229 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-563229 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-563229 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-563229 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3e7bb6eb-7f1a-47a6-bc8c-d50aea89af17] Pending
helpers_test.go:344: "sp-pod" [3e7bb6eb-7f1a-47a6-bc8c-d50aea89af17] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3e7bb6eb-7f1a-47a6-bc8c-d50aea89af17] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004337683s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-563229 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-563229 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-563229 delete -f testdata/storage-provisioner/pod.yaml: (1.822306246s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-563229 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6da3d258-bf13-4402-93eb-844b8be46384] Pending
helpers_test.go:344: "sp-pod" [6da3d258-bf13-4402-93eb-844b8be46384] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003927197s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-563229 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.82s)
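
The sequence above is the standard persistence probe: write a marker file through the first pod, delete the pod, re-apply the same manifest, and confirm the file is still visible once the replacement pod mounts the claim. A Go sketch of that loop (assumptions: kubectl on PATH, the context and manifest paths from this run; the wait for the new pod to reach Running is elided):

    // pvcprobe.go: a sketch of the PVC persistence check.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubectl runs a kubectl subcommand against the test's context.
    func kubectl(args ...string) error {
        full := append([]string{"--context", "functional-563229"}, args...)
        out, err := exec.Command("kubectl", full...).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        steps := [][]string{
            {"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"}, // write through pod #1
            {"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
            {"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
            // (the real test waits here until the new pod is Running)
            {"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // the file must survive via the PVC
        }
        for _, s := range steps {
            if err := kubectl(s...); err != nil {
                panic(err)
            }
        }
    }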

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh -n functional-563229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 cp functional-563229:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1295729683/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh -n functional-563229 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh -n functional-563229 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)
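
The three copies above cover host-to-node, node-back-to-host, and host-to-an-arbitrary-node-path, each verified by cat-ing the file back over ssh. A minimal sketch of one host-to-node-and-back round trip (assumptions: minikube on PATH; the /tmp destination is hypothetical):

    // cpcheck.go: a sketch of a minikube cp round trip.
    package main

    import "os/exec"

    // mk runs a minikube subcommand against the profile used in this run.
    func mk(args ...string) error {
        full := append([]string{"-p", "functional-563229"}, args...)
        return exec.Command("minikube", full...).Run()
    }

    func main() {
        // Copy a host file into the node's filesystem...
        if err := mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
            panic(err)
        }
        // ...and back out again; diffing the two local files would close
        // the loop the way the test's "ssh sudo cat" checks do.
        if err := mk("cp", "functional-563229:/home/docker/cp-test.txt", "/tmp/cp-test.txt"); err != nil {
            panic(err)
        }
    }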

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/747256/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo cat /etc/test/nested/copy/747256/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/747256.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo cat /etc/ssl/certs/747256.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/747256.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo cat /usr/share/ca-certificates/747256.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7472562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo cat /etc/ssl/certs/7472562.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7472562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo cat /usr/share/ca-certificates/7472562.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-563229 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-563229 ssh "sudo systemctl is-active docker": exit status 1 (324.907119ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-563229 ssh "sudo systemctl is-active crio": exit status 1 (343.3254ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
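
The non-zero exits above are the passing outcome: with containerd selected as the runtime, docker and crio must be inactive, and "systemctl is-active" reports an inactive unit by printing "inactive" and exiting with status 3, which minikube ssh surfaces as exit status 1. A small Go sketch of the same probe (assumptions: minikube on PATH, profile name from this run):

    // runtimecheck.go: a sketch of the inactive-runtime probe.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, unit := range []string{"docker", "crio"} {
            out, err := exec.Command("minikube", "-p", "functional-563229",
                "ssh", "sudo systemctl is-active "+unit).CombinedOutput()
            // A non-nil err is the expected (passing) case here: it means
            // the queried unit is not active on the node.
            fmt.Printf("%s: %s(err: %v)\n", unit, out, err)
        }
    }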

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.15s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 version --short
--- PASS: TestFunctional/parallel/Version/short (0.15s)

TestFunctional/parallel/Version/components (1.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 version -o=json --components: (1.488909284s)
--- PASS: TestFunctional/parallel/Version/components (1.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-563229 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-563229
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-563229
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-563229 image ls --format short --alsologtostderr:
I0120 13:51:21.985155  787330 out.go:345] Setting OutFile to fd 1 ...
I0120 13:51:21.985383  787330 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:21.985407  787330 out.go:358] Setting ErrFile to fd 2...
I0120 13:51:21.985432  787330 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:21.985707  787330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
I0120 13:51:21.986355  787330 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:21.986517  787330 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:21.988107  787330 cli_runner.go:164] Run: docker container inspect functional-563229 --format={{.State.Status}}
I0120 13:51:22.016010  787330 ssh_runner.go:195] Run: systemctl --version
I0120 13:51:22.016081  787330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563229
I0120 13:51:22.042886  787330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/functional-563229/id_rsa Username:docker}
I0120 13:51:22.134675  787330 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-563229 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-563229  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:f9d642 | 21.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
| docker.io/library/minikube-local-cache-test | functional-563229  | sha256:5b04d7 | 990B   |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:7fc9d4 | 67.9MB |
| registry.k8s.io/kube-controller-manager     | v1.32.0            | sha256:a8d049 | 24MB   |
| registry.k8s.io/kube-proxy                  | v1.32.0            | sha256:2f5038 | 27.4MB |
| docker.io/library/nginx                     | latest             | sha256:781d90 | 68.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-apiserver              | v1.32.0            | sha256:2b5bd0 | 26.2MB |
| registry.k8s.io/kube-scheduler              | v1.32.0            | sha256:c3ff26 | 18.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-563229 image ls --format table --alsologtostderr:
I0120 13:51:22.856131  787550 out.go:345] Setting OutFile to fd 1 ...
I0120 13:51:22.856315  787550 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:22.856341  787550 out.go:358] Setting ErrFile to fd 2...
I0120 13:51:22.856374  787550 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:22.856731  787550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
I0120 13:51:22.858087  787550 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:22.858278  787550 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:22.858840  787550 cli_runner.go:164] Run: docker container inspect functional-563229 --format={{.State.Status}}
I0120 13:51:22.877650  787550 ssh_runner.go:195] Run: systemctl --version
I0120 13:51:22.877709  787550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563229
I0120 13:51:22.895976  787550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/functional-563229/id_rsa Username:docker}
I0120 13:51:22.987016  787550 ssh_runner.go:195] Run: sudo crictl images --output json
E0120 13:51:24.947163  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-563229 image ls --format json --alsologtostderr:
[{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:5b04d72ef9b5924d0bd62876941e10705ee30b82b09acebaaa99a459a56739cf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-563229"],"size":"990"},{"id":"sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21565101"},{"id":"sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"68507108"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-563229"],"size":"2173567"},{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"23964889"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"67941650"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"26213662"},{"id":"sha256:2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"27362084"},{"id":"sha256:c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"18922208"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-563229 image ls --format json --alsologtostderr:
I0120 13:51:22.596691  787477 out.go:345] Setting OutFile to fd 1 ...
I0120 13:51:22.596833  787477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:22.596856  787477 out.go:358] Setting ErrFile to fd 2...
I0120 13:51:22.596862  787477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:22.597187  787477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
I0120 13:51:22.598175  787477 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:22.598319  787477 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:22.598981  787477 cli_runner.go:164] Run: docker container inspect functional-563229 --format={{.State.Status}}
I0120 13:51:22.619533  787477 ssh_runner.go:195] Run: systemctl --version
I0120 13:51:22.619600  787477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563229
I0120 13:51:22.647063  787477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/functional-563229/id_rsa Username:docker}
I0120 13:51:22.738553  787477 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
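
Of the four list formats, JSON is the easiest to consume programmatically: each entry carries id, repoDigests, repoTags, and size (a byte count encoded as a string). A small decoder sketch over that shape (assumptions: minikube on PATH; the struct is named for illustration):

    // imagels.go: decode "image ls --format json" into Go values.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // image mirrors the fields visible in the JSON output above.
    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"`
    }

    func main() {
        out, err := exec.Command("minikube", "-p", "functional-563229",
            "image", "ls", "--format", "json").Output()
        if err != nil {
            panic(err)
        }
        var images []image
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Println(img.ID, img.RepoTags, img.Size)
        }
    }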

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-563229 image ls --format yaml --alsologtostderr:
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "26213662"
- id: sha256:a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "23964889"
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-563229
size: "2173567"
- id: sha256:5b04d72ef9b5924d0bd62876941e10705ee30b82b09acebaaa99a459a56739cf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-563229
size: "990"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "27362084"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "21565101"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "68507108"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "67941650"
- id: sha256:c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "18922208"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-563229 image ls --format yaml --alsologtostderr:
I0120 13:51:22.263947  787419 out.go:345] Setting OutFile to fd 1 ...
I0120 13:51:22.264119  787419 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:22.264131  787419 out.go:358] Setting ErrFile to fd 2...
I0120 13:51:22.264137  787419 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:22.264392  787419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
I0120 13:51:22.269934  787419 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:22.270208  787419 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:22.271374  787419 cli_runner.go:164] Run: docker container inspect functional-563229 --format={{.State.Status}}
I0120 13:51:22.293168  787419 ssh_runner.go:195] Run: systemctl --version
I0120 13:51:22.293321  787419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563229
I0120 13:51:22.337167  787419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/functional-563229/id_rsa Username:docker}
I0120 13:51:22.431641  787419 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-563229 ssh pgrep buildkitd: exit status 1 (363.977996ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image build -t localhost/my-image:functional-563229 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 image build -t localhost/my-image:functional-563229 testdata/build --alsologtostderr: (3.268702365s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-563229 image build -t localhost/my-image:functional-563229 testdata/build --alsologtostderr:
I0120 13:51:22.364444  787435 out.go:345] Setting OutFile to fd 1 ...
I0120 13:51:22.365142  787435 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:22.365159  787435 out.go:358] Setting ErrFile to fd 2...
I0120 13:51:22.365164  787435 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:51:22.365423  787435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
I0120 13:51:22.369855  787435 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:22.371264  787435 config.go:182] Loaded profile config "functional-563229": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 13:51:22.371786  787435 cli_runner.go:164] Run: docker container inspect functional-563229 --format={{.State.Status}}
I0120 13:51:22.391643  787435 ssh_runner.go:195] Run: systemctl --version
I0120 13:51:22.391698  787435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-563229
I0120 13:51:22.410191  787435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/functional-563229/id_rsa Username:docker}
I0120 13:51:22.498961  787435 build_images.go:161] Building image from path: /tmp/build.3886048910.tar
I0120 13:51:22.499038  787435 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 13:51:22.509121  787435 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3886048910.tar
I0120 13:51:22.522919  787435 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3886048910.tar: stat -c "%s %y" /var/lib/minikube/build/build.3886048910.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3886048910.tar': No such file or directory
I0120 13:51:22.522951  787435 ssh_runner.go:362] scp /tmp/build.3886048910.tar --> /var/lib/minikube/build/build.3886048910.tar (3072 bytes)
I0120 13:51:22.552127  787435 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3886048910
I0120 13:51:22.561281  787435 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3886048910 -xf /var/lib/minikube/build/build.3886048910.tar
I0120 13:51:22.573340  787435 containerd.go:394] Building image: /var/lib/minikube/build/build.3886048910
I0120 13:51:22.573419  787435 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3886048910 --local dockerfile=/var/lib/minikube/build/build.3886048910 --output type=image,name=localhost/my-image:functional-563229
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d76a6ba0db8d4bbb9cae23970e28909c77ae31a1d6f36865a811f99b7660e8cc
#8 exporting manifest sha256:d76a6ba0db8d4bbb9cae23970e28909c77ae31a1d6f36865a811f99b7660e8cc 0.0s done
#8 exporting config sha256:f91444f994982df53657d15b5d45a24705bd92e75fb0dbd623222e0d67f55079 0.0s done
#8 naming to localhost/my-image:functional-563229 done
#8 DONE 0.2s
I0120 13:51:25.530418  787435 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3886048910 --local dockerfile=/var/lib/minikube/build/build.3886048910 --output type=image,name=localhost/my-image:functional-563229: (2.956971006s)
I0120 13:51:25.530495  787435 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3886048910
I0120 13:51:25.540369  787435 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3886048910.tar
I0120 13:51:25.549541  787435 build_images.go:217] Built localhost/my-image:functional-563229 from /tmp/build.3886048910.tar
I0120 13:51:25.549571  787435 build_images.go:133] succeeded building to: functional-563229
I0120 13:51:25.549576  787435 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)
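
As the buildctl log shows, "minikube image build" tars the local context, ships it to the node, and builds there with BuildKit's dockerfile.v0 frontend; the Dockerfile in this run is three steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). Reproducing the build from the host is a single command; a sketch (assumption: a local testdata/build directory containing such a Dockerfile):

    // imagebuild.go: a sketch of the in-cluster image build above.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // The build happens inside the node; the resulting image lands in
        // the node's containerd store under the given tag.
        cmd := exec.Command("minikube", "-p", "functional-563229",
            "image", "build", "-t", "localhost/my-image:functional-563229", "testdata/build")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }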

TestFunctional/parallel/ImageCommands/Setup (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-563229
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image load --daemon kicbase/echo-server:functional-563229 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 image load --daemon kicbase/echo-server:functional-563229 --alsologtostderr: (1.178341949s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image load --daemon kicbase/echo-server:functional-563229 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 image load --daemon kicbase/echo-server:functional-563229 --alsologtostderr: (1.091541056s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-563229
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image load --daemon kicbase/echo-server:functional-563229 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-563229 image load --daemon kicbase/echo-server:functional-563229 --alsologtostderr: (1.070766908s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "427.749559ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "89.282659ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "461.611042ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "78.164662ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image save kicbase/echo-server:functional-563229 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)
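
This save pairs with the ImageLoadFromFile check further down: the image is exported from the node to a tarball on the host, then re-imported into the node's containerd store. The round trip as a sketch (assumptions: minikube on PATH; the /tmp tar path is hypothetical, the run above used a Jenkins workspace path):

    // imagesave.go: a sketch of the save/load tarball round trip.
    package main

    import "os/exec"

    // mk runs a minikube subcommand against the profile used in this run.
    func mk(args ...string) error {
        full := append([]string{"-p", "functional-563229"}, args...)
        return exec.Command("minikube", full...).Run()
    }

    func main() {
        tar := "/tmp/echo-server-save.tar" // hypothetical local path
        if err := mk("image", "save", "kicbase/echo-server:functional-563229", tar); err != nil {
            panic(err)
        }
        if err := mk("image", "load", tar); err != nil {
            panic(err)
        }
    }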

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-563229 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-563229 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-563229 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-563229 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 783135: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image rm kicbase/echo-server:functional-563229 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-563229 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-563229 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c605995a-ed80-47d5-85fe-2a7a723728af] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c605995a-ed80-47d5-85fe-2a7a723728af] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.007269361s
I0120 13:50:49.406021  747256 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)
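
The tunnel tests hinge on a "minikube tunnel" process running in the background: it routes traffic so LoadBalancer services receive a reachable ingress IP (10.104.102.227 later in this run), which this wait then polls for. A sketch of that poll, using the same jsonpath query as the IngressIP step below (assumptions: kubectl on PATH, the tunnel already running):

    // tunnelwait.go: poll for a LoadBalancer ingress IP.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        for i := 0; i < 60; i++ {
            out, _ := exec.Command("kubectl", "--context", "functional-563229",
                "get", "svc", "nginx-svc", "-o",
                "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
            if ip := strings.TrimSpace(string(out)); ip != "" {
                fmt.Println("ingress IP:", ip)
                return
            }
            time.Sleep(2 * time.Second)
        }
        panic("timed out waiting for the LoadBalancer ingress IP")
    }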

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-563229
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 image save --daemon kicbase/echo-server:functional-563229 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-563229
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-563229 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.102.227 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-563229 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
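
Taken together, the TunnelCmd serial steps (StartTunnel, WaitService, IngressIP, AccessDirect, DeleteTunnel) amount to the workflow below. A sketch assuming the functional-563229 profile from this run is up; nginx-svc is the LoadBalancer service defined in testdata/testsvc.yaml:

	# Run the tunnel in the background so LoadBalancer services get an ingress IP.
	out/minikube-linux-arm64 -p functional-563229 tunnel --alsologtostderr &
	TUNNEL_PID=$!
	kubectl --context functional-563229 apply -f testdata/testsvc.yaml
	# Once the pod is Running, the service reports an ingress IP (10.104.102.227 in this run).
	SVC_IP=$(kubectl --context functional-563229 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl "http://${SVC_IP}"
	# DeleteTunnel equivalent: stop the background process to tear the routes down.
	kill "$TUNNEL_PID"
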
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-563229 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-563229 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-pb5d6" [dc86fb6b-9ce5-4461-b9af-b3c8a3d55e2c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-pb5d6" [dc86fb6b-9ce5-4461-b9af-b3c8a3d55e2c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004365667s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdany-port2827794848/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737381068998488898" to /tmp/TestFunctionalparallelMountCmdany-port2827794848/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737381068998488898" to /tmp/TestFunctionalparallelMountCmdany-port2827794848/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737381068998488898" to /tmp/TestFunctionalparallelMountCmdany-port2827794848/001/test-1737381068998488898
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (432.856866ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 13:51:09.432391  747256 retry.go:31] will retry after 373.313859ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 13:51 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 13:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 13:51 test-1737381068998488898
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh cat /mount-9p/test-1737381068998488898
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-563229 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d6827aa0-46c0-406b-8846-c63f9c6875cd] Pending
helpers_test.go:344: "busybox-mount" [d6827aa0-46c0-406b-8846-c63f9c6875cd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d6827aa0-46c0-406b-8846-c63f9c6875cd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d6827aa0-46c0-406b-8846-c63f9c6875cd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003397185s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-563229 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdany-port2827794848/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.37s)
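
The any-port test above is the basic 9p mount workflow. A sketch with a hypothetical host directory /tmp/mount-demo (the first findmnt can race the mount coming up, which is why a retry appears in the log):

	# Export a host directory into the guest over 9p; with no --port, one is chosen automatically.
	mkdir -p /tmp/mount-demo && echo "hello" > /tmp/mount-demo/created-by-test
	out/minikube-linux-arm64 mount -p functional-563229 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
	MOUNT_PID=$!
	sleep 2   # give the mount a moment to come up; the test retries findmnt instead
	out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-563229 ssh -- ls -la /mount-9p
	# Tear down: unmount in the guest, then stop the mount process.
	out/minikube-linux-arm64 -p functional-563229 ssh "sudo umount -f /mount-9p"
	kill "$MOUNT_PID"
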
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 service list -o json
functional_test.go:1494: Took "579.994909ms" to run "out/minikube-linux-arm64 -p functional-563229 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30634
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30634
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)
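
The ServiceCmd subtests above (DeployApp through URL) cover the usual expose-and-reach workflow. A sketch against the functional-563229 profile from this run:

	# Deploy and expose a test server on a NodePort.
	kubectl --context functional-563229 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-563229 expose deployment hello-node --type=NodePort --port=8080
	# Human-readable and JSON listings of the exposed services.
	out/minikube-linux-arm64 -p functional-563229 service list
	out/minikube-linux-arm64 -p functional-563229 service list -o json
	# Resolve the reachable endpoint (http://192.168.49.2:30634 in this run).
	out/minikube-linux-arm64 -p functional-563229 service hello-node --url
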
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdspecific-port2381241562/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (556.587518ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 13:51:17.928107  747256 retry.go:31] will retry after 467.534563ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdspecific-port2381241562/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-563229 ssh "sudo umount -f /mount-9p": exit status 1 (470.488258ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-563229 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdspecific-port2381241562/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.39s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393122176/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393122176/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393122176/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T" /mount1
2025/01/20 13:51:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-563229 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-563229 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393122176/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393122176/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-563229 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393122176/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)
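
VerifyCleanup starts three mounts of the same host directory (/mount1 through /mount3) and then relies on the kill switch rather than stopping each daemon individually. The one-shot cleanup it exercises:

	# Terminate every "minikube mount" process belonging to the profile at once.
	out/minikube-linux-arm64 mount -p functional-563229 --kill=true
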
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-563229
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-563229
--- PASS: TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-563229
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-191772 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-191772 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m5.522580586s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (126.38s)
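
The --ha flag used above brings the profile up with multiple control-plane nodes (ha-191772, -m02, -m03) behind a shared API endpoint, visible later in the logs as https://192.168.49.254:8443. A sketch of the start-and-verify pair this test runs:

	# Start a highly available cluster and confirm every node reports Running/Configured.
	out/minikube-linux-arm64 start -p ha-191772 --wait=true --memory=2200 --ha -v=7 --alsologtostderr \
	  --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr
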
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- rollout status deployment/busybox
E0120 13:53:41.085471  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-191772 -- rollout status deployment/busybox: (28.610579494s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-2dbql -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-674tb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-68c9m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-2dbql -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-674tb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-68c9m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-2dbql -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-674tb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-68c9m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.56s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-2dbql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-2dbql -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-674tb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-674tb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-68c9m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191772 -- exec busybox-58667487b6-68c9m -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
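
DeployApp and PingHostFromPods together check in-pod DNS resolution and host reachability from every busybox replica. Condensed to a single pod, the check looks roughly like this (items[0] picks an arbitrary replica):

	# Resolve a cluster-internal name from inside a pod, then ping the host gateway.
	POD=$(out/minikube-linux-arm64 kubectl -p ha-191772 -- get pods -o jsonpath='{.items[0].metadata.name}')
	out/minikube-linux-arm64 kubectl -p ha-191772 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local
	out/minikube-linux-arm64 kubectl -p ha-191772 -- exec "$POD" -- sh -c "ping -c 1 192.168.49.1"
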
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-191772 -v=7 --alsologtostderr
E0120 13:54:08.788625  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-191772 -v=7 --alsologtostderr: (24.38214673s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr: (1.040166368s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.42s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-191772 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.98s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-191772 status --output json -v=7 --alsologtostderr: (1.003289949s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp testdata/cp-test.txt ha-191772:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3315296289/001/cp-test_ha-191772.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772:/home/docker/cp-test.txt ha-191772-m02:/home/docker/cp-test_ha-191772_ha-191772-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m02 "sudo cat /home/docker/cp-test_ha-191772_ha-191772-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772:/home/docker/cp-test.txt ha-191772-m03:/home/docker/cp-test_ha-191772_ha-191772-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m03 "sudo cat /home/docker/cp-test_ha-191772_ha-191772-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772:/home/docker/cp-test.txt ha-191772-m04:/home/docker/cp-test_ha-191772_ha-191772-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m04 "sudo cat /home/docker/cp-test_ha-191772_ha-191772-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp testdata/cp-test.txt ha-191772-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3315296289/001/cp-test_ha-191772-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m02:/home/docker/cp-test.txt ha-191772:/home/docker/cp-test_ha-191772-m02_ha-191772.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772 "sudo cat /home/docker/cp-test_ha-191772-m02_ha-191772.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m02:/home/docker/cp-test.txt ha-191772-m03:/home/docker/cp-test_ha-191772-m02_ha-191772-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m03 "sudo cat /home/docker/cp-test_ha-191772-m02_ha-191772-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m02:/home/docker/cp-test.txt ha-191772-m04:/home/docker/cp-test_ha-191772-m02_ha-191772-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m04 "sudo cat /home/docker/cp-test_ha-191772-m02_ha-191772-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp testdata/cp-test.txt ha-191772-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3315296289/001/cp-test_ha-191772-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m03:/home/docker/cp-test.txt ha-191772:/home/docker/cp-test_ha-191772-m03_ha-191772.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772 "sudo cat /home/docker/cp-test_ha-191772-m03_ha-191772.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m03:/home/docker/cp-test.txt ha-191772-m02:/home/docker/cp-test_ha-191772-m03_ha-191772-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m02 "sudo cat /home/docker/cp-test_ha-191772-m03_ha-191772-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m03:/home/docker/cp-test.txt ha-191772-m04:/home/docker/cp-test_ha-191772-m03_ha-191772-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m04 "sudo cat /home/docker/cp-test_ha-191772-m03_ha-191772-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp testdata/cp-test.txt ha-191772-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3315296289/001/cp-test_ha-191772-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m04:/home/docker/cp-test.txt ha-191772:/home/docker/cp-test_ha-191772-m04_ha-191772.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772 "sudo cat /home/docker/cp-test_ha-191772-m04_ha-191772.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m04:/home/docker/cp-test.txt ha-191772-m02:/home/docker/cp-test_ha-191772-m04_ha-191772-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m02 "sudo cat /home/docker/cp-test_ha-191772-m04_ha-191772-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m04:/home/docker/cp-test.txt ha-191772-m03:/home/docker/cp-test_ha-191772-m04_ha-191772-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m03 "sudo cat /home/docker/cp-test_ha-191772-m04_ha-191772-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.34s)
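
CopyFile runs the full host-to-node and node-to-node copy matrix across all four nodes. Reduced to one example of each distinct operation:

	# Host -> node, node -> node, then verify on the destination over ssh.
	out/minikube-linux-arm64 -p ha-191772 cp testdata/cp-test.txt ha-191772-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-191772 cp ha-191772-m02:/home/docker/cp-test.txt ha-191772-m03:/home/docker/cp-test_m02.txt
	out/minikube-linux-arm64 -p ha-191772 ssh -n ha-191772-m03 "sudo cat /home/docker/cp-test_m02.txt"
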
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-191772 node stop m02 -v=7 --alsologtostderr: (12.09554501s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr: exit status 7 (759.628151ms)

-- stdout --
	ha-191772
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-191772-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-191772-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-191772-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0120 13:55:06.200702  803987 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:55:06.200881  803987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:55:06.200891  803987 out.go:358] Setting ErrFile to fd 2...
	I0120 13:55:06.200897  803987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:55:06.201159  803987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 13:55:06.201350  803987 out.go:352] Setting JSON to false
	I0120 13:55:06.201387  803987 mustload.go:65] Loading cluster: ha-191772
	I0120 13:55:06.201488  803987 notify.go:220] Checking for updates...
	I0120 13:55:06.201894  803987 config.go:182] Loaded profile config "ha-191772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:55:06.201918  803987 status.go:174] checking status of ha-191772 ...
	I0120 13:55:06.202485  803987 cli_runner.go:164] Run: docker container inspect ha-191772 --format={{.State.Status}}
	I0120 13:55:06.222588  803987 status.go:371] ha-191772 host status = "Running" (err=<nil>)
	I0120 13:55:06.222613  803987 host.go:66] Checking if "ha-191772" exists ...
	I0120 13:55:06.222929  803987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-191772
	I0120 13:55:06.253007  803987 host.go:66] Checking if "ha-191772" exists ...
	I0120 13:55:06.253334  803987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:55:06.253391  803987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-191772
	I0120 13:55:06.272880  803987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33559 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/ha-191772/id_rsa Username:docker}
	I0120 13:55:06.367219  803987 ssh_runner.go:195] Run: systemctl --version
	I0120 13:55:06.372005  803987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:55:06.390649  803987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 13:55:06.452795  803987 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:73 SystemTime:2025-01-20 13:55:06.438939849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 13:55:06.453557  803987 kubeconfig.go:125] found "ha-191772" server: "https://192.168.49.254:8443"
	I0120 13:55:06.453708  803987 api_server.go:166] Checking apiserver status ...
	I0120 13:55:06.453764  803987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:55:06.465372  803987 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1520/cgroup
	I0120 13:55:06.474639  803987 api_server.go:182] apiserver freezer: "6:freezer:/docker/d168f7b6ee5a0ddb97f37c51b1bbfc50ca56f0f6c6de030bd9a6a4470bdee9b3/kubepods/burstable/pod0345a02666958a26eb159a92dc1fef2f/115d9cf5e0a3c88aef1f431ad21196f383c5a8f4f2cc362a66a2a6d8a54c8a22"
	I0120 13:55:06.474712  803987 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d168f7b6ee5a0ddb97f37c51b1bbfc50ca56f0f6c6de030bd9a6a4470bdee9b3/kubepods/burstable/pod0345a02666958a26eb159a92dc1fef2f/115d9cf5e0a3c88aef1f431ad21196f383c5a8f4f2cc362a66a2a6d8a54c8a22/freezer.state
	I0120 13:55:06.484275  803987 api_server.go:204] freezer state: "THAWED"
	I0120 13:55:06.484307  803987 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 13:55:06.493193  803987 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 13:55:06.493224  803987 status.go:463] ha-191772 apiserver status = Running (err=<nil>)
	I0120 13:55:06.493245  803987 status.go:176] ha-191772 status: &{Name:ha-191772 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:55:06.493262  803987 status.go:174] checking status of ha-191772-m02 ...
	I0120 13:55:06.493657  803987 cli_runner.go:164] Run: docker container inspect ha-191772-m02 --format={{.State.Status}}
	I0120 13:55:06.518431  803987 status.go:371] ha-191772-m02 host status = "Stopped" (err=<nil>)
	I0120 13:55:06.518452  803987 status.go:384] host is not running, skipping remaining checks
	I0120 13:55:06.518474  803987 status.go:176] ha-191772-m02 status: &{Name:ha-191772-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:55:06.518494  803987 status.go:174] checking status of ha-191772-m03 ...
	I0120 13:55:06.518799  803987 cli_runner.go:164] Run: docker container inspect ha-191772-m03 --format={{.State.Status}}
	I0120 13:55:06.542220  803987 status.go:371] ha-191772-m03 host status = "Running" (err=<nil>)
	I0120 13:55:06.542247  803987 host.go:66] Checking if "ha-191772-m03" exists ...
	I0120 13:55:06.543084  803987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-191772-m03
	I0120 13:55:06.562129  803987 host.go:66] Checking if "ha-191772-m03" exists ...
	I0120 13:55:06.562444  803987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:55:06.562498  803987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-191772-m03
	I0120 13:55:06.586260  803987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/ha-191772-m03/id_rsa Username:docker}
	I0120 13:55:06.682682  803987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:55:06.694254  803987 kubeconfig.go:125] found "ha-191772" server: "https://192.168.49.254:8443"
	I0120 13:55:06.694284  803987 api_server.go:166] Checking apiserver status ...
	I0120 13:55:06.694326  803987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:55:06.705395  803987 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1329/cgroup
	I0120 13:55:06.716217  803987 api_server.go:182] apiserver freezer: "6:freezer:/docker/088423731bbac641a6441ddb3c47de33a3299c1e92276337619b6c5d1412ddb7/kubepods/burstable/pod897d02e0e26720513b1cea97f05d4400/f8481b6a97d75ce3f2d9af519eae60bb6b04e27f8d06f5e3ad8e4cf5d56ebeda"
	I0120 13:55:06.716288  803987 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/088423731bbac641a6441ddb3c47de33a3299c1e92276337619b6c5d1412ddb7/kubepods/burstable/pod897d02e0e26720513b1cea97f05d4400/f8481b6a97d75ce3f2d9af519eae60bb6b04e27f8d06f5e3ad8e4cf5d56ebeda/freezer.state
	I0120 13:55:06.724938  803987 api_server.go:204] freezer state: "THAWED"
	I0120 13:55:06.724969  803987 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 13:55:06.733072  803987 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 13:55:06.733099  803987 status.go:463] ha-191772-m03 apiserver status = Running (err=<nil>)
	I0120 13:55:06.733107  803987 status.go:176] ha-191772-m03 status: &{Name:ha-191772-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:55:06.733124  803987 status.go:174] checking status of ha-191772-m04 ...
	I0120 13:55:06.733425  803987 cli_runner.go:164] Run: docker container inspect ha-191772-m04 --format={{.State.Status}}
	I0120 13:55:06.750832  803987 status.go:371] ha-191772-m04 host status = "Running" (err=<nil>)
	I0120 13:55:06.750860  803987 host.go:66] Checking if "ha-191772-m04" exists ...
	I0120 13:55:06.751145  803987 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-191772-m04
	I0120 13:55:06.767700  803987 host.go:66] Checking if "ha-191772-m04" exists ...
	I0120 13:55:06.767997  803987 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:55:06.768054  803987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-191772-m04
	I0120 13:55:06.786679  803987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33574 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/ha-191772-m04/id_rsa Username:docker}
	I0120 13:55:06.881257  803987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:55:06.894549  803987 status.go:176] ha-191772-m04 status: &{Name:ha-191772-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
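
Note that the Non-zero exit above is expected: status returns exit code 7 while any host is stopped, so the test (and any script) has to tolerate it. The stop-and-inspect pair:

	# Stop one control-plane node; status then exits 7 because m02 is down.
	out/minikube-linux-arm64 -p ha-191772 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr || echo "status exited $? (expected while a node is stopped)"
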
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-191772 node start m02 -v=7 --alsologtostderr: (17.89957882s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr: (1.004491985s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.01s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.309449143s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-191772 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-191772 -v=7 --alsologtostderr
E0120 13:55:40.965493  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:40.971879  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:40.983244  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:41.004788  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:41.046221  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:41.128110  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:41.289658  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:41.611592  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:42.253672  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:43.535215  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:46.097762  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:55:51.219148  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:56:01.461049  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-191772 -v=7 --alsologtostderr: (37.039738469s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-191772 --wait=true -v=7 --alsologtostderr
E0120 13:56:21.942844  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:57:02.904487  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-191772 --wait=true -v=7 --alsologtostderr: (1m28.699998001s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-191772
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (125.94s)
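
This test asserts that the node set survives a full stop/start cycle. By hand:

	# Capture the node list, bounce the whole cluster, and compare.
	out/minikube-linux-arm64 node list -p ha-191772 -v=7 --alsologtostderr
	out/minikube-linux-arm64 stop -p ha-191772 -v=7 --alsologtostderr
	out/minikube-linux-arm64 start -p ha-191772 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-arm64 node list -p ha-191772   # should match the pre-stop list
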
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-191772 node delete m03 -v=7 --alsologtostderr: (9.799182074s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.72s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-191772 stop -v=7 --alsologtostderr: (35.776783421s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr: exit status 7 (120.994997ms)

-- stdout --
	ha-191772
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-191772-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-191772-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0120 13:58:21.276350  818532 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:58:21.276532  818532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:58:21.276562  818532 out.go:358] Setting ErrFile to fd 2...
	I0120 13:58:21.276583  818532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:58:21.276831  818532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 13:58:21.277047  818532 out.go:352] Setting JSON to false
	I0120 13:58:21.277110  818532 mustload.go:65] Loading cluster: ha-191772
	I0120 13:58:21.277193  818532 notify.go:220] Checking for updates...
	I0120 13:58:21.277618  818532 config.go:182] Loaded profile config "ha-191772": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 13:58:21.277662  818532 status.go:174] checking status of ha-191772 ...
	I0120 13:58:21.278537  818532 cli_runner.go:164] Run: docker container inspect ha-191772 --format={{.State.Status}}
	I0120 13:58:21.297752  818532 status.go:371] ha-191772 host status = "Stopped" (err=<nil>)
	I0120 13:58:21.297774  818532 status.go:384] host is not running, skipping remaining checks
	I0120 13:58:21.297780  818532 status.go:176] ha-191772 status: &{Name:ha-191772 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:58:21.297809  818532 status.go:174] checking status of ha-191772-m02 ...
	I0120 13:58:21.298122  818532 cli_runner.go:164] Run: docker container inspect ha-191772-m02 --format={{.State.Status}}
	I0120 13:58:21.329567  818532 status.go:371] ha-191772-m02 host status = "Stopped" (err=<nil>)
	I0120 13:58:21.329630  818532 status.go:384] host is not running, skipping remaining checks
	I0120 13:58:21.329637  818532 status.go:176] ha-191772-m02 status: &{Name:ha-191772-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:58:21.329657  818532 status.go:174] checking status of ha-191772-m04 ...
	I0120 13:58:21.329966  818532 cli_runner.go:164] Run: docker container inspect ha-191772-m04 --format={{.State.Status}}
	I0120 13:58:21.347181  818532 status.go:371] ha-191772-m04 host status = "Stopped" (err=<nil>)
	I0120 13:58:21.347207  818532 status.go:384] host is not running, skipping remaining checks
	I0120 13:58:21.347214  818532 status.go:176] ha-191772-m04 status: &{Name:ha-191772-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.90s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-191772 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0120 13:58:24.828326  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:58:41.085510  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-191772 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.706469966s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.72s)
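The go-template in the final check above flattens each node's Ready condition into one True/False line per node. For reference, a minimal shell sketch of the same assertion (the jsonpath spelling and the sort are my own phrasing, not what the test runs):

	# One status line per node, taken from its Ready condition
	kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' | sort -u
	# a healthy restart prints exactly one line: True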

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (45.75s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-191772 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-191772 --control-plane -v=7 --alsologtostderr: (44.749901484s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-191772 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.010807655s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

TestJSONOutput/start/Command (56.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-824847 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0120 14:00:40.957261  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:01:08.669730  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-824847 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (56.474608634s)
--- PASS: TestJSONOutput/start/Command (56.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-824847 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-824847 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-824847 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-824847 --output=json --user=testUser: (5.798255166s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-608001 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-608001 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.207376ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6d64bc10-a214-42d2-85a9-fd8cd354a436","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-608001] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"025b38ea-9e05-4691-a096-fdfad619d181","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20242"}}
	{"specversion":"1.0","id":"3c3b0303-c90c-4cac-aca0-9dc7fd98015d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3b0010ba-4f16-41de-bd7c-e7544af21ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig"}}
	{"specversion":"1.0","id":"e7a22ea2-a94d-4fb4-b824-497eb96dd24b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube"}}
	{"specversion":"1.0","id":"a90f99da-b6b7-4077-a00e-fc5b9afd2f95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dfaa8ffe-d323-4db8-be44-be7b82ed2a53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"67dc3bb5-445b-47e4-a9c2-c44ad0b4da3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-608001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-608001
--- PASS: TestErrorJSONOutput (0.24s)
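Each line in the stdout block above is a CloudEvents envelope with the minikube payload under .data, and the failure arrives as a single io.k8s.sigs.minikube.error event. A small sketch for picking that event out of a run like this one (flags trimmed; jq is my addition, the test does the equivalent comparison in Go):

	out/minikube-linux-arm64 start -p json-output-error-608001 --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64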

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.14s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-759818 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-759818 --network=: (36.00084173s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-759818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-759818
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-759818: (2.110805013s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.14s)

TestKicCustomNetwork/use_default_bridge_network (32.53s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-782949 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-782949 --network=bridge: (30.537925624s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-782949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-782949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-782949: (1.959419993s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.53s)

TestKicExistingNetwork (30.95s)

=== RUN   TestKicExistingNetwork
I0120 14:02:56.350808  747256 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0120 14:02:56.366543  747256 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0120 14:02:56.366617  747256 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0120 14:02:56.366636  747256 cli_runner.go:164] Run: docker network inspect existing-network
W0120 14:02:56.384613  747256 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0120 14:02:56.384644  747256 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0120 14:02:56.384659  747256 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0120 14:02:56.384756  747256 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 14:02:56.401350  747256 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6680a3ea5430 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:de:d8:55:54} reservation:<nil>}
I0120 14:02:56.401736  747256 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001707e30}
I0120 14:02:56.401762  747256 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0120 14:02:56.401822  747256 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0120 14:02:56.478281  747256 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-307281 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-307281 --network=existing-network: (28.825262621s)
helpers_test.go:175: Cleaning up "existing-network-307281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-307281
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-307281: (1.967640174s)
I0120 14:03:27.287039  747256 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.95s)
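The sequence above amounts to: the inspect calls confirm the network does not exist yet, the helper picks a free private /24 and creates it with the flags logged at 14:02:56, and the subsequent start attaches to it rather than creating another. A condensed sketch of doing the same by hand (subnet and options copied from the log; the minikube-specific labels omitted):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 existing-network
	out/minikube-linux-arm64 start -p existing-network-307281 --network=existing-network
	docker network ls --format {{.Name}}   # existing-network is reused, not recreated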

                                                
                                    
TestKicCustomSubnet (39.41s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-911395 --subnet=192.168.60.0/24
E0120 14:03:41.084837  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-911395 --subnet=192.168.60.0/24: (37.23264063s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-911395 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-911395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-911395
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-911395: (2.145073289s)
--- PASS: TestKicCustomSubnet (39.41s)

TestKicStaticIP (35.74s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-067461 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-067461 --static-ip=192.168.200.200: (33.458666062s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-067461 ip
helpers_test.go:175: Cleaning up "static-ip-067461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-067461
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-067461: (2.131874432s)
--- PASS: TestKicStaticIP (35.74s)
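A minimal sketch of the assertion this test encodes, assuming a plain shell comparison in place of the Go helper:

	out/minikube-linux-arm64 start -p static-ip-067461 --static-ip=192.168.200.200
	[ "$(out/minikube-linux-arm64 -p static-ip-067461 ip)" = "192.168.200.200" ] && echo "static IP honored"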

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (65.61s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-732383 --driver=docker  --container-runtime=containerd
E0120 14:05:04.150778  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-732383 --driver=docker  --container-runtime=containerd: (29.000036283s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-734968 --driver=docker  --container-runtime=containerd
E0120 14:05:40.956940  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-734968 --driver=docker  --container-runtime=containerd: (31.077026477s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-732383
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-734968
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-734968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-734968
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-734968: (2.098737855s)
helpers_test.go:175: Cleaning up "first-732383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-732383
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-732383: (1.982881913s)
--- PASS: TestMinikubeProfile (65.61s)
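The test toggles the active profile between the two clusters and re-lists after each switch. A compact sketch of that round trip, assuming both clusters already exist:

	out/minikube-linux-arm64 profile first-732383    # make the first cluster the active profile
	out/minikube-linux-arm64 profile list -ojson     # both profiles stay listed under .valid
	out/minikube-linux-arm64 profile second-734968   # switch the active profile to the second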

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-697193 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-697193 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.868395452s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.87s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-697193 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
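The check above only lists the mount point. A slightly more explicit probe, assuming the 9p transport that --mount uses (the mount | grep line is my addition, not part of the test):

	out/minikube-linux-arm64 -p mount-start-1-697193 ssh -- ls /minikube-host
	out/minikube-linux-arm64 -p mount-start-1-697193 ssh -- "mount | grep /minikube-host"
	# expect a 9p entry; its msize/port options should echo the 6543/46464 flags passed at start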

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.7s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-702895 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-702895 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.697850773s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.70s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-702895 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-697193 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-697193 --alsologtostderr -v=5: (1.624632852s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-702895 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-702895
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-702895: (1.197655511s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.19s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-702895
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-702895: (6.187939632s)
--- PASS: TestMountStart/serial/RestartStopped (7.19s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-702895 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (109.34s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-483073 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-483073 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m48.843211284s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.34s)

TestMultiNode/serial/DeployApp2Nodes (15.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-483073 -- rollout status deployment/busybox: (13.337594615s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-fx9ft -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-gfd6c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-fx9ft -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-gfd6c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-fx9ft -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-gfd6c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.34s)

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-fx9ft -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-fx9ft -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-gfd6c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-483073 -- exec busybox-58667487b6-gfd6c -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
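The sh -c pipeline relies on busybox nslookup's fixed output layout: awk 'NR==5' keeps the line carrying the resolved address of host.minikube.internal, and cut takes its third space-delimited field, which is then pinged from the pod. A standalone sketch of the same two steps (pod name copied from above):

	HOST_IP=$(kubectl exec busybox-58667487b6-fx9ft -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl exec busybox-58667487b6-fx9ft -- ping -c 1 "$HOST_IP"   # resolves to 192.168.67.1 in this run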

                                                
                                    
TestMultiNode/serial/AddNode (16.73s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-483073 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-483073 -v 3 --alsologtostderr: (16.092797331s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.73s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-483073 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
E0120 14:08:41.085402  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp testdata/cp-test.txt multinode-483073:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile290810176/001/cp-test_multinode-483073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073:/home/docker/cp-test.txt multinode-483073-m02:/home/docker/cp-test_multinode-483073_multinode-483073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m02 "sudo cat /home/docker/cp-test_multinode-483073_multinode-483073-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073:/home/docker/cp-test.txt multinode-483073-m03:/home/docker/cp-test_multinode-483073_multinode-483073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m03 "sudo cat /home/docker/cp-test_multinode-483073_multinode-483073-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp testdata/cp-test.txt multinode-483073-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile290810176/001/cp-test_multinode-483073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073-m02:/home/docker/cp-test.txt multinode-483073:/home/docker/cp-test_multinode-483073-m02_multinode-483073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073 "sudo cat /home/docker/cp-test_multinode-483073-m02_multinode-483073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073-m02:/home/docker/cp-test.txt multinode-483073-m03:/home/docker/cp-test_multinode-483073-m02_multinode-483073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m03 "sudo cat /home/docker/cp-test_multinode-483073-m02_multinode-483073-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp testdata/cp-test.txt multinode-483073-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile290810176/001/cp-test_multinode-483073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073-m03:/home/docker/cp-test.txt multinode-483073:/home/docker/cp-test_multinode-483073-m03_multinode-483073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073 "sudo cat /home/docker/cp-test_multinode-483073-m03_multinode-483073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073-m03:/home/docker/cp-test.txt multinode-483073-m02:/home/docker/cp-test_multinode-483073-m03_multinode-483073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m02 "sudo cat /home/docker/cp-test_multinode-483073-m03_multinode-483073-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.14s)
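All of the hops above use one addressing scheme: a bare path means the local host, and <node>:<path> means a file inside that node. A reduced sketch of a single round trip (the /tmp target name is illustrative):

	out/minikube-linux-arm64 -p multinode-483073 cp testdata/cp-test.txt multinode-483073-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-483073 cp multinode-483073-m02:/home/docker/cp-test.txt /tmp/cp-test_back.txt
	out/minikube-linux-arm64 -p multinode-483073 ssh -n multinode-483073-m02 "sudo cat /home/docker/cp-test.txt"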

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-483073 node stop m03: (1.219554627s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-483073 status: exit status 7 (505.449923ms)

                                                
                                                
-- stdout --
	multinode-483073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-483073-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-483073-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-483073 status --alsologtostderr: exit status 7 (527.146557ms)

                                                
                                                
-- stdout --
	multinode-483073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-483073-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-483073-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 14:08:53.706757  872756 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:08:53.706869  872756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:08:53.706881  872756 out.go:358] Setting ErrFile to fd 2...
	I0120 14:08:53.706888  872756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:08:53.707250  872756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 14:08:53.707451  872756 out.go:352] Setting JSON to false
	I0120 14:08:53.707499  872756 mustload.go:65] Loading cluster: multinode-483073
	I0120 14:08:53.707972  872756 notify.go:220] Checking for updates...
	I0120 14:08:53.708551  872756 config.go:182] Loaded profile config "multinode-483073": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:08:53.708583  872756 status.go:174] checking status of multinode-483073 ...
	I0120 14:08:53.710372  872756 cli_runner.go:164] Run: docker container inspect multinode-483073 --format={{.State.Status}}
	I0120 14:08:53.731399  872756 status.go:371] multinode-483073 host status = "Running" (err=<nil>)
	I0120 14:08:53.731432  872756 host.go:66] Checking if "multinode-483073" exists ...
	I0120 14:08:53.731752  872756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-483073
	I0120 14:08:53.763455  872756 host.go:66] Checking if "multinode-483073" exists ...
	I0120 14:08:53.763776  872756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 14:08:53.763821  872756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-483073
	I0120 14:08:53.785927  872756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33679 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/multinode-483073/id_rsa Username:docker}
	I0120 14:08:53.879119  872756 ssh_runner.go:195] Run: systemctl --version
	I0120 14:08:53.884589  872756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:08:53.896311  872756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 14:08:53.958255  872756 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:63 SystemTime:2025-01-20 14:08:53.94708448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 14:08:53.958976  872756 kubeconfig.go:125] found "multinode-483073" server: "https://192.168.67.2:8443"
	I0120 14:08:53.959015  872756 api_server.go:166] Checking apiserver status ...
	I0120 14:08:53.959065  872756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:08:53.970807  872756 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	I0120 14:08:53.980406  872756 api_server.go:182] apiserver freezer: "6:freezer:/docker/276875a6c4119a633569d9b4c97545595918fadfbf22ca1d624c1de9229cecd2/kubepods/burstable/pod74170c156b323f3a1825a16cd2682aa6/77a6ee1c4a6a4ecc4ee02eda38c11f185af77b873763366abc83a6b6aef71b04"
	I0120 14:08:53.980500  872756 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/276875a6c4119a633569d9b4c97545595918fadfbf22ca1d624c1de9229cecd2/kubepods/burstable/pod74170c156b323f3a1825a16cd2682aa6/77a6ee1c4a6a4ecc4ee02eda38c11f185af77b873763366abc83a6b6aef71b04/freezer.state
	I0120 14:08:53.989323  872756 api_server.go:204] freezer state: "THAWED"
	I0120 14:08:53.989350  872756 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0120 14:08:53.998263  872756 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0120 14:08:53.998295  872756 status.go:463] multinode-483073 apiserver status = Running (err=<nil>)
	I0120 14:08:53.998306  872756 status.go:176] multinode-483073 status: &{Name:multinode-483073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 14:08:53.998322  872756 status.go:174] checking status of multinode-483073-m02 ...
	I0120 14:08:53.998639  872756 cli_runner.go:164] Run: docker container inspect multinode-483073-m02 --format={{.State.Status}}
	I0120 14:08:54.017183  872756 status.go:371] multinode-483073-m02 host status = "Running" (err=<nil>)
	I0120 14:08:54.017219  872756 host.go:66] Checking if "multinode-483073-m02" exists ...
	I0120 14:08:54.017563  872756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-483073-m02
	I0120 14:08:54.035057  872756 host.go:66] Checking if "multinode-483073-m02" exists ...
	I0120 14:08:54.035432  872756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 14:08:54.035496  872756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-483073-m02
	I0120 14:08:54.053526  872756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33684 SSHKeyPath:/home/jenkins/minikube-integration/20242-741865/.minikube/machines/multinode-483073-m02/id_rsa Username:docker}
	I0120 14:08:54.147417  872756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:08:54.159565  872756 status.go:176] multinode-483073-m02 status: &{Name:multinode-483073-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 14:08:54.159601  872756 status.go:174] checking status of multinode-483073-m03 ...
	I0120 14:08:54.159919  872756 cli_runner.go:164] Run: docker container inspect multinode-483073-m03 --format={{.State.Status}}
	I0120 14:08:54.177029  872756 status.go:371] multinode-483073-m03 host status = "Stopped" (err=<nil>)
	I0120 14:08:54.177053  872756 status.go:384] host is not running, skipping remaining checks
	I0120 14:08:54.177059  872756 status.go:176] multinode-483073-m03 status: &{Name:multinode-483073-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
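Note the exit codes: status exits 0 only when everything is up and folds a stopped node into a non-zero code (7 in both runs above), so the test asserts on the exit status as well as on the text. A minimal sketch of scripting against that behavior:

	out/minikube-linux-arm64 -p multinode-483073 status
	rc=$?
	[ "$rc" -ne 0 ] && echo "cluster degraded or stopped (status exited $rc)"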

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.19s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-483073 node start m03 -v=7 --alsologtostderr: (9.424092949s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.19s)

TestMultiNode/serial/RestartKeepsNodes (89.18s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-483073
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-483073
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-483073: (24.902072998s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-483073 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-483073 --wait=true -v=8 --alsologtostderr: (1m4.146744443s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-483073
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.18s)
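The property under test is that the node list is unchanged across a full stop and restart. The same comparison as a shell sketch (the test makes the equivalent comparison in Go):

	before=$(out/minikube-linux-arm64 node list -p multinode-483073)
	out/minikube-linux-arm64 stop -p multinode-483073
	out/minikube-linux-arm64 start -p multinode-483073 --wait=true
	after=$(out/minikube-linux-arm64 node list -p multinode-483073)
	[ "$before" = "$after" ] && echo "all nodes survived the restart"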

                                                
                                    
TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-483073 node delete m03: (4.619459839s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

TestMultiNode/serial/StopMultiNode (23.95s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 stop
E0120 14:10:40.957210  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-483073 stop: (23.741653129s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-483073 status: exit status 7 (109.973059ms)

                                                
                                                
-- stdout --
	multinode-483073
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-483073-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-483073 status --alsologtostderr: exit status 7 (99.740007ms)

                                                
                                                
-- stdout --
	multinode-483073
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-483073-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 14:11:02.754994  880790 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:11:02.755262  880790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:11:02.755291  880790 out.go:358] Setting ErrFile to fd 2...
	I0120 14:11:02.755312  880790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:11:02.755601  880790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 14:11:02.755834  880790 out.go:352] Setting JSON to false
	I0120 14:11:02.755906  880790 mustload.go:65] Loading cluster: multinode-483073
	I0120 14:11:02.756044  880790 notify.go:220] Checking for updates...
	I0120 14:11:02.756417  880790 config.go:182] Loaded profile config "multinode-483073": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:11:02.756443  880790 status.go:174] checking status of multinode-483073 ...
	I0120 14:11:02.757022  880790 cli_runner.go:164] Run: docker container inspect multinode-483073 --format={{.State.Status}}
	I0120 14:11:02.776705  880790 status.go:371] multinode-483073 host status = "Stopped" (err=<nil>)
	I0120 14:11:02.776730  880790 status.go:384] host is not running, skipping remaining checks
	I0120 14:11:02.776738  880790 status.go:176] multinode-483073 status: &{Name:multinode-483073 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 14:11:02.776769  880790 status.go:174] checking status of multinode-483073-m02 ...
	I0120 14:11:02.777103  880790 cli_runner.go:164] Run: docker container inspect multinode-483073-m02 --format={{.State.Status}}
	I0120 14:11:02.801269  880790 status.go:371] multinode-483073-m02 host status = "Stopped" (err=<nil>)
	I0120 14:11:02.801349  880790 status.go:384] host is not running, skipping remaining checks
	I0120 14:11:02.801360  880790 status.go:176] multinode-483073-m02 status: &{Name:multinode-483073-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.95s)
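
Note that both status calls above exit 7 rather than 0: minikube reserves non-zero exit codes for non-Running states, so callers have to treat 7 as "stopped" rather than as a hard failure, exactly as the test does. A hedged shell sketch:

	# Exit code 7 from 'minikube status' means the host is stopped, not that the command failed.
	out/minikube-linux-arm64 -p multinode-483073 status
	case $? in
	  0) echo running ;;
	  7) echo stopped ;;   # expected right after 'minikube stop'
	  *) echo "unexpected status error" >&2 ;;
	esac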

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-483073 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-483073 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.52210246s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-483073 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.22s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-483073
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-483073-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-483073-m02 --driver=docker  --container-runtime=containerd: exit status 14 (96.28007ms)

                                                
                                                
-- stdout --
	* [multinode-483073-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-483073-m02' is duplicated with machine name 'multinode-483073-m02' in profile 'multinode-483073'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-483073-m03 --driver=docker  --container-runtime=containerd
E0120 14:12:04.033855  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-483073-m03 --driver=docker  --container-runtime=containerd: (33.54631445s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-483073
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-483073: exit status 80 (320.100419ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-483073 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-483073-m03 already exists in multinode-483073-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-483073-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-483073-m03: (2.011452216s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.03s)
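
The two failures above are both name-collision guards: worker machines inside a profile are named <profile>-m02, <profile>-m03, and so on, so those names are off-limits for new profiles, and 'node add' likewise refuses a node name an existing profile already owns. Sketch of the colliding invocation:

	# 'multinode-483073-m02' is already the second machine of profile 'multinode-483073',
	# so starting a new profile under that name is rejected with exit 14:
	out/minikube-linux-arm64 start -p multinode-483073-m02 --driver=docker --container-runtime=containerd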

                                                
                                    
TestPreload (126.52s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-572862 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0120 14:13:41.085746  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-572862 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m28.382886585s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-572862 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-572862 image pull gcr.io/k8s-minikube/busybox: (2.324171946s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-572862
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-572862: (12.003537416s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-572862 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-572862 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.997900113s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-572862 image list
helpers_test.go:175: Cleaning up "test-preload-572862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-572862
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-572862: (2.467409396s)
--- PASS: TestPreload (126.52s)
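
Condensed, the scenario above verifies that an image pulled into a cluster created with --preload=false survives a stop/start cycle; a hedged replay with a hypothetical profile name:

	minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 \
	    --driver=docker --container-runtime=containerd
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo          # restart without --preload=false
	minikube -p preload-demo image list     # busybox should still be listed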

                                                
                                    
TestScheduledStopUnix (109.82s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-874633 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-874633 --memory=2048 --driver=docker  --container-runtime=containerd: (32.740481608s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-874633 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-874633 -n scheduled-stop-874633
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-874633 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 14:15:16.868134  747256 retry.go:31] will retry after 98.209µs: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.868619  747256 retry.go:31] will retry after 123.809µs: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.869761  747256 retry.go:31] will retry after 206.799µs: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.873307  747256 retry.go:31] will retry after 312.573µs: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.874512  747256 retry.go:31] will retry after 662.089µs: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.875644  747256 retry.go:31] will retry after 933.2µs: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.876843  747256 retry.go:31] will retry after 744.811µs: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.877993  747256 retry.go:31] will retry after 2.17736ms: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.881243  747256 retry.go:31] will retry after 2.511031ms: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.884549  747256 retry.go:31] will retry after 3.123551ms: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.888811  747256 retry.go:31] will retry after 3.118996ms: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.893052  747256 retry.go:31] will retry after 7.831653ms: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.901291  747256 retry.go:31] will retry after 9.753536ms: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.911538  747256 retry.go:31] will retry after 9.73174ms: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.921875  747256 retry.go:31] will retry after 23.332059ms: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
I0120 14:15:16.946161  747256 retry.go:31] will retry after 57.530432ms: open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/scheduled-stop-874633/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-874633 --cancel-scheduled
E0120 14:15:40.964575  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-874633 -n scheduled-stop-874633
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-874633
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-874633 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-874633
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-874633: exit status 7 (76.877821ms)

                                                
                                                
-- stdout --
	scheduled-stop-874633
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-874633 -n scheduled-stop-874633
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-874633 -n scheduled-stop-874633: exit status 7 (72.919446ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-874633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-874633
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-874633: (5.440293368s)
--- PASS: TestScheduledStopUnix (109.82s)
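
The scheduled-stop flags exercised above, collected in one place (profile name from this run; timings arbitrary):

	minikube stop -p scheduled-stop-874633 --schedule 5m                  # arm a stop 5 minutes out
	minikube status -p scheduled-stop-874633 --format='{{.TimeToStop}}'   # inspect the countdown
	minikube stop -p scheduled-stop-874633 --cancel-scheduled             # disarm it
	minikube stop -p scheduled-stop-874633 --schedule 15s                 # re-arm; host reaches Stopped shortly after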

                                                
                                    
TestInsufficientStorage (13.19s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-377166 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-377166 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.66984128s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f1083c01-a0d9-4e4d-ba17-23a8bd77e662","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-377166] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6573636e-a96c-45ec-ae3b-93997fe315e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20242"}}
	{"specversion":"1.0","id":"c8f5286e-92d8-459b-a689-8c88623d32a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d50ec1db-b446-4628-a6d9-dd2baff51dc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig"}}
	{"specversion":"1.0","id":"b9a453fc-fdf2-480d-988b-f45efed4c4f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube"}}
	{"specversion":"1.0","id":"60c9bbea-ccff-4534-a3ab-280ef78c5c91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a9b58a7e-3df3-4192-b020-848d2b852151","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c8152a6d-2222-40bd-bf1c-6f80df627959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a26e3cf7-d484-415a-80fc-1bae952ceced","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3e82162f-23b4-4ad1-b04a-0550c7d8849a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4928b457-6461-416f-bcd9-d5180f41b2ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a2f8fe3b-ed6d-4a55-b250-c60c42f2ff67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-377166\" primary control-plane node in \"insufficient-storage-377166\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8af907a1-f357-412a-ab31-4ae4fb8cb61a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bab8711-a921-4b3f-842b-59ce2e466939","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"af941bb3-314e-43f7-a95b-b35993b92c73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-377166 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-377166 --output=json --layout=cluster: exit status 7 (286.916375ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-377166","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-377166","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 14:16:44.354815  899667 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-377166" does not appear in /home/jenkins/minikube-integration/20242-741865/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-377166 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-377166 --output=json --layout=cluster: exit status 7 (285.084637ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-377166","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-377166","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 14:16:44.639354  899728 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-377166" does not appear in /home/jenkins/minikube-integration/20242-741865/kubeconfig
	E0120 14:16:44.649405  899728 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/insufficient-storage-377166/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-377166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-377166
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-377166: (1.94678809s)
--- PASS: TestInsufficientStorage (13.19s)
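
The RSRC_DOCKER_STORAGE payload above (exit code 26) embeds its own remediation advice; unescaped, it amounts to:

	docker system prune                   # optionally with -a, to reclaim unused Docker data
	minikube ssh -- docker system prune   # only if the cluster uses the Docker container runtime
	# or, to bypass the free-space check entirely:
	minikube start -p insufficient-storage-377166 --force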

                                                
                                    
TestRunningBinaryUpgrade (94.77s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.872200665 start -p running-upgrade-850665 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.872200665 start -p running-upgrade-850665 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (50.712318893s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-850665 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-850665 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.008367386s)
helpers_test.go:175: Cleaning up "running-upgrade-850665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-850665
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-850665: (4.204785786s)
--- PASS: TestRunningBinaryUpgrade (94.77s)

                                                
                                    
TestKubernetesUpgrade (108.53s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-496182 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-496182 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.643670976s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-496182
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-496182: (1.281769095s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-496182 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-496182 status --format={{.Host}}: exit status 7 (100.717508ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-496182 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-496182 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.394195216s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-496182 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-496182 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-496182 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (150.894457ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-496182] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-496182
	    minikube start -p kubernetes-upgrade-496182 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4961822 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-496182 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-496182 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-496182 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.352002332s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-496182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-496182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-496182: (3.464321171s)
--- PASS: TestKubernetesUpgrade (108.53s)
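
As the exit-106 message above spells out, in-place downgrades are refused; the supported path is to recreate the profile at the older version (condensed from the suggestion text):

	minikube delete -p kubernetes-upgrade-496182
	minikube start -p kubernetes-upgrade-496182 --kubernetes-version=v1.20.0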

                                                
                                    
TestMissingContainerUpgrade (184.29s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3035580265 start -p missing-upgrade-946661 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3035580265 start -p missing-upgrade-946661 --memory=2200 --driver=docker  --container-runtime=containerd: (1m34.842919937s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-946661
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-946661: (10.286125095s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-946661
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-946661 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0120 14:18:41.085407  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-946661 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m13.661521288s)
helpers_test.go:175: Cleaning up "missing-upgrade-946661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-946661
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-946661: (4.713903525s)
--- PASS: TestMissingContainerUpgrade (184.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017185 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-017185 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (101.011483ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-017185] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
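
The exit-14 usage error above also covers the case where the Kubernetes version comes from global config rather than a flag; per the message, clearing it first makes --no-kubernetes valid:

	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-017185 --no-kubernetes --driver=docker --container-runtime=containerd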

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017185 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-017185 --driver=docker  --container-runtime=containerd: (38.755630265s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-017185 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017185 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-017185 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.167021522s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-017185 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-017185 status -o json: exit status 2 (297.081899ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-017185","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-017185
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-017185: (1.996195742s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.46s)
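
The status JSON above is convenient for scripting; for example, with jq (an assumption, not something the test itself uses):

	# Host is Running but kubelet is Stopped, hence the exit code 2 above.
	minikube -p NoKubernetes-017185 status -o json | jq -r '.Host, .Kubelet'
	# -> Running
	#    Stopped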

                                                
                                    
TestNoKubernetes/serial/Start (9.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017185 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-017185 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.565680821s)
--- PASS: TestNoKubernetes/serial/Start (9.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-017185 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-017185 "sudo systemctl is-active --quiet service kubelet": exit status 1 (260.60342ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
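
'systemctl is-active' exits 3 for an inactive unit, and minikube ssh propagates that as the exit status seen above; a sketch of using the same command as a predicate:

	minikube ssh -p NoKubernetes-017185 "sudo systemctl is-active --quiet service kubelet" \
	    || echo "kubelet is not running (expected with --no-kubernetes)"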

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-017185
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-017185: (1.197899991s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017185 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-017185 --driver=docker  --container-runtime=containerd: (6.339872144s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.34s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-017185 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-017185 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.889342ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (119.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2113146554 start -p stopped-upgrade-782967 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2113146554 start -p stopped-upgrade-782967 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.191703447s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2113146554 -p stopped-upgrade-782967 stop
E0120 14:20:40.956758  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2113146554 -p stopped-upgrade-782967 stop: (20.156400759s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-782967 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-782967 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.26781265s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.62s)
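
The upgrade path exercised above in one view: the archived v1.26.0 binary creates and stops the profile, then the freshly built binary adopts and restarts it (the temp-file name is specific to this run):

	/tmp/minikube-v1.26.0.2113146554 start -p stopped-upgrade-782967 --memory=2200 \
	    --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.26.0.2113146554 -p stopped-upgrade-782967 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-782967 --memory=2200 \
	    --driver=docker --container-runtime=containerd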

                                                
                                    
TestPause/serial/Start (103.33s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-853381 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-853381 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m43.331785296s)
--- PASS: TestPause/serial/Start (103.33s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-782967
E0120 14:21:44.152334  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-782967: (1.499243932s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.50s)

                                                
                                    
TestNetworkPlugins/group/false (3.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-465957 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-465957 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (204.518641ms)

                                                
                                                
-- stdout --
	* [false-465957] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 14:22:31.760212  934782 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:22:31.760391  934782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:22:31.760403  934782 out.go:358] Setting ErrFile to fd 2...
	I0120 14:22:31.760409  934782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:22:31.760664  934782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-741865/.minikube/bin
	I0120 14:22:31.761106  934782 out.go:352] Setting JSON to false
	I0120 14:22:31.762213  934782 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14697,"bootTime":1737368255,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 14:22:31.762291  934782 start.go:139] virtualization:  
	I0120 14:22:31.766210  934782 out.go:177] * [false-465957] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 14:22:31.769369  934782 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:22:31.770318  934782 notify.go:220] Checking for updates...
	I0120 14:22:31.776597  934782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:22:31.779787  934782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-741865/kubeconfig
	I0120 14:22:31.782563  934782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-741865/.minikube
	I0120 14:22:31.785374  934782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 14:22:31.788299  934782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:22:31.791788  934782 config.go:182] Loaded profile config "pause-853381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 14:22:31.791894  934782 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:22:31.821816  934782 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 14:22:31.821947  934782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 14:22:31.895222  934782 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 14:22:31.878775893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 14:22:31.895353  934782 docker.go:318] overlay module found
	I0120 14:22:31.898660  934782 out.go:177] * Using the docker driver based on user configuration
	I0120 14:22:31.901646  934782 start.go:297] selected driver: docker
	I0120 14:22:31.901686  934782 start.go:901] validating driver "docker" against <nil>
	I0120 14:22:31.901702  934782 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:22:31.905273  934782 out.go:201] 
	W0120 14:22:31.908206  934782 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0120 14:22:31.911211  934782 out.go:201] 

                                                
                                                
** /stderr **
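
The exit-14 rejection above is expected: the containerd runtime requires some CNI, so --cni=false can never start. Any concrete CNI value would pass this check, e.g. (a hypothetical invocation, not run by the test):

	minikube start -p false-465957 --cni=bridge --driver=docker --container-runtime=containerd
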
net_test.go:88: 
----------------------- debugLogs start: false-465957 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-465957

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-465957

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-465957

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-465957

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-465957

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-465957

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-465957

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-465957

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-465957

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-465957

>>> host: /etc/nsswitch.conf:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /etc/hosts:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /etc/resolv.conf:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-465957

>>> host: crictl pods:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: crictl containers:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> k8s: describe netcat deployment:
error: context "false-465957" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-465957" does not exist

>>> k8s: netcat logs:
error: context "false-465957" does not exist

>>> k8s: describe coredns deployment:
error: context "false-465957" does not exist

>>> k8s: describe coredns pods:
error: context "false-465957" does not exist

>>> k8s: coredns logs:
error: context "false-465957" does not exist

>>> k8s: describe api server pod(s):
error: context "false-465957" does not exist

>>> k8s: api server logs:
error: context "false-465957" does not exist

>>> host: /etc/cni:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: ip a s:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: ip r s:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: iptables-save:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: iptables table nat:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> k8s: describe kube-proxy daemon set:
error: context "false-465957" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-465957" does not exist

>>> k8s: kube-proxy logs:
error: context "false-465957" does not exist

>>> host: kubelet daemon status:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: kubelet daemon config:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> k8s: kubelet logs:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 14:22:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-853381
contexts:
- context:
    cluster: pause-853381
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 14:22:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-853381
  name: pause-853381
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-853381
  user:
    client-certificate: /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/pause-853381/client.crt
    client-key: /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/pause-853381/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-465957

>>> host: docker daemon status:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: docker daemon config:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /etc/docker/daemon.json:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: docker system info:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: cri-docker daemon status:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: cri-docker daemon config:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: cri-dockerd version:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: containerd daemon status:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: containerd daemon config:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /etc/containerd/config.toml:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: containerd config dump:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: crio daemon status:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: crio daemon config:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: /etc/crio:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

>>> host: crio config:
* Profile "false-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-465957"

----------------------- debugLogs end: false-465957 [took: 3.523688121s] --------------------------------
helpers_test.go:175: Cleaning up "false-465957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-465957
--- PASS: TestNetworkPlugins/group/false (3.89s)
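Note on the debugLogs dump above: every probe failed for the same reason, namely that the collector ran after the false-465957 profile had already been torn down. The captured kubectl config holds only a pause-853381 entry with an empty current-context, so kubectl reports a missing context while minikube reports a missing profile. Below is a minimal sketch of how a collector could guard against a deleted context, assuming k8s.io/client-go is available; the kubeconfig path and names are copied from this log, everything else is illustrative.

// context_check.go: a sketch, not part of the test harness.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/20242-741865/kubeconfig"

	// Load the raw kubeconfig without resolving a context, so a missing
	// context is a map lookup failure rather than a hard error.
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		panic(err)
	}

	// "context was not found for specified context: false-465957" is
	// kubectl failing the equivalent of this lookup.
	if _, ok := cfg.Contexts["false-465957"]; !ok {
		fmt.Println("context false-465957 is gone; skipping kubectl-based diagnostics")
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext) // "" in the dump above
}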

TestPause/serial/SecondStartNoReconfiguration (8.28s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-853381 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-853381 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.253918475s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.28s)

TestPause/serial/Pause (0.88s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-853381 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

TestPause/serial/VerifyStatus (0.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-853381 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-853381 --output=json --layout=cluster: exit status 2 (438.351522ms)

-- stdout --
	{"Name":"pause-853381","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-853381","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
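Note on the JSON above: the cluster layout encodes component health as HTTP-style codes (200 OK, 418 Paused, 405 Stopped), which is why a fully paused profile makes the status command exit non-zero even though nothing is wrong. A sketch of decoding that payload with encoding/json follows; the structs mirror only the keys visible in this log.

// status_decode.go: a sketch of decoding the cluster-layout status JSON above.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed copy of the payload logged above.
	raw := `{"Name":"pause-853381","StatusCode":418,"StatusName":"Paused",` +
		`"Nodes":[{"Name":"pause-853381","Components":{` +
		`"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},` +
		`"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// A paused profile reports 418 for paused components and 405 for the
	// stopped kubelet, matching the exit status 2 seen above.
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("%s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}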

TestPause/serial/Unpause (0.95s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-853381 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

TestPause/serial/PauseAgain (1.01s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-853381 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-853381 --alsologtostderr -v=5: (1.010174152s)
--- PASS: TestPause/serial/PauseAgain (1.01s)

TestPause/serial/DeletePaused (3.15s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-853381 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-853381 --alsologtostderr -v=5: (3.148729589s)
--- PASS: TestPause/serial/DeletePaused (3.15s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-853381
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-853381: exit status 1 (23.420685ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-853381: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)
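Note: deletion is verified negatively here. "docker volume inspect" on the deleted profile must fail, and the "no such volume" error above is the success case. Below is a sketch of that pattern with os/exec; the volume name comes from the log and the helper is illustrative.

// verify_deleted.go: a sketch of treating a non-zero docker exit as "deleted".
package main

import (
	"fmt"
	"os/exec"
)

func volumeGone(name string) (bool, error) {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	if err == nil {
		return false, nil // the volume still exists
	}
	if _, ok := err.(*exec.ExitError); ok {
		// Matches the log above: exit status 1 plus
		// "Error response from daemon: get pause-853381: no such volume".
		fmt.Printf("docker said: %s", out)
		return true, nil
	}
	return false, err // docker binary missing, permission problem, etc.
}

func main() {
	gone, err := volumeGone("pause-853381")
	if err != nil {
		panic(err)
	}
	fmt.Println("volume gone:", gone)
}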

TestStartStop/group/old-k8s-version/serial/FirstStart (147.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0120 14:25:40.957406  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-140749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m27.857465316s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.86s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-140749 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3c09ea94-5e96-489e-af1c-cc815019abb8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3c09ea94-5e96-489e-af1c-cc815019abb8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003512393s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-140749 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)
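Note: the Pending / Running transitions above come from a label-selector wait on pods matching integration-test=busybox. The sketch below shows the same idea with client-go; it is not the harness's actual helpers_test.go code, and the kubeconfig path is simply taken from this job.

// wait_busybox.go: a sketch of polling pods by label until Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20242-741865/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(8 * time.Minute) // the test above waits 8m0s
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("busybox is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for busybox")
}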

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-140749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-140749 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/old-k8s-version/serial/Stop (12.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-140749 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-140749 --alsologtostderr -v=3: (12.065362727s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140749 -n old-k8s-version-140749
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-140749 -n old-k8s-version-140749: exit status 7 (104.212802ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-140749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
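Note on "exit status 7 (may be ok)": as I read minikube's status command, the exit code is a bitmask of per-layer "not running" flags, so 7 (= 1|2|4) means host, control plane, and Kubernetes are all down, which is exactly what is expected right after "minikube stop". The sketch below decodes it; the flag names are an assumption based on minikube's status.go, not anything printed by this job.

// exit7.go: a sketch of decoding minikube's status exit code.
package main

import "fmt"

const (
	// Assumed to mirror cmd/minikube/cmd/status.go; treat as illustrative.
	minikubeNotRunning = 1 << 0 // host
	clusterNotRunning  = 1 << 1 // control plane
	k8sNotRunning      = 1 << 2 // kubernetes components
)

func main() {
	code := 7 // the exit status observed above
	fmt.Printf("host stopped: %v\n", code&minikubeNotRunning != 0)
	fmt.Printf("cluster stopped: %v\n", code&clusterNotRunning != 0)
	fmt.Printf("k8s stopped: %v\n", code&k8sNotRunning != 0)
	// 7 = 1|2|4: everything is down, hence the test's "(may be ok)".
}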

TestStartStop/group/no-preload/serial/FirstStart (77.08s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-193023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-193023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m17.08198819s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.08s)

TestStartStop/group/no-preload/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-193023 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [096817f5-f6af-48b3-8669-a7e42ce04689] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [096817f5-f6af-48b3-8669-a7e42ce04689] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.007229121s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-193023 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-193023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-193023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.046969617s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-193023 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-193023 --alsologtostderr -v=3
E0120 14:28:41.084965  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-193023 --alsologtostderr -v=3: (12.114385043s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-193023 -n no-preload-193023
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-193023 -n no-preload-193023: exit status 7 (80.526171ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-193023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (267.66s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-193023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 14:28:44.035380  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:30:40.956901  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-193023 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (4m27.179894913s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-193023 -n no-preload-193023
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.66s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xpbsn" [86524404-fe7a-44e4-ba08-66f0287433e3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004883014s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rckbc" [d27bc21b-d9a1-4fa3-bac8-53e222d90a2b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005534307s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xpbsn" [86524404-fe7a-44e4-ba08-66f0287433e3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00393895s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-193023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rckbc" [d27bc21b-d9a1-4fa3-bac8-53e222d90a2b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004883914s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-140749 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-193023 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-140749 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)
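Note: VerifyKubernetesImages diffs "image list --format=json" against the image set minikube installs for that Kubernetes version, and the busybox and kindnetd entries above are merely logged as extras, not failures. A rough sketch of the filtering idea follows; the JSON field name and the allow-list are assumptions for illustration, not minikube's documented schema.

// image_filter.go: a sketch of flagging images outside an expected set.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image assumes entries expose a "repoTags" field; this is an illustrative
// guess at the schema, not minikube's documented output format.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "old-k8s-version-140749",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Crude allow-list; the real test compares against the exact
			// per-version image set rather than registry prefixes.
			if !strings.Contains(tag, "registry.k8s.io/") && !strings.Contains(tag, "k8s.gcr.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}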

TestStartStop/group/no-preload/serial/Pause (4.16s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-193023 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-193023 --alsologtostderr -v=1: (1.076593596s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-193023 -n no-preload-193023
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-193023 -n no-preload-193023: exit status 2 (392.244428ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-193023 -n no-preload-193023
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-193023 -n no-preload-193023: exit status 2 (458.633842ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-193023 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-193023 -n no-preload-193023
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-193023 -n no-preload-193023
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.16s)

TestStartStop/group/old-k8s-version/serial/Pause (4.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-140749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-140749 --alsologtostderr -v=1: (1.044242723s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140749 -n old-k8s-version-140749
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140749 -n old-k8s-version-140749: exit status 2 (436.450488ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-140749 -n old-k8s-version-140749
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-140749 -n old-k8s-version-140749: exit status 2 (474.075753ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-140749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-140749 --alsologtostderr -v=1: (1.127552373s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-140749 -n old-k8s-version-140749
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-140749 -n old-k8s-version-140749
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.25s)

TestStartStop/group/embed-certs/serial/FirstStart (96.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-179859 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-179859 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m36.144302585s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.14s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-857589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 14:33:41.086087  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-857589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m37.341735728s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.34s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-179859 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85584472-6269-42f9-ae3f-ae243052a2bf] Pending
helpers_test.go:344: "busybox" [85584472-6269-42f9-ae3f-ae243052a2bf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [85584472-6269-42f9-ae3f-ae243052a2bf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003905194s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-179859 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-857589 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [08fd2f8f-44e6-4f4b-b187-eff5ed1d36d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [08fd2f8f-44e6-4f4b-b187-eff5ed1d36d4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003811625s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-857589 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-179859 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-179859 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.061348469s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-179859 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-179859 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-179859 --alsologtostderr -v=3: (12.2556434s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-857589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-857589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.06481662s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-857589 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-857589 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-857589 --alsologtostderr -v=3: (11.994083148s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-179859 -n embed-certs-179859
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-179859 -n embed-certs-179859: exit status 7 (75.083006ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-179859 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (269.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-179859 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-179859 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (4m29.119259708s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-179859 -n embed-certs-179859
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (269.54s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-857589 -n default-k8s-diff-port-857589
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-857589 -n default-k8s-diff-port-857589: exit status 7 (72.737151ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-857589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-857589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 14:35:40.956703  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:34.027012  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:34.033492  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:34.044984  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:34.066481  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:34.108036  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:34.189548  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:34.351504  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:34.673288  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:35.315363  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:36.597292  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:39.159060  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:44.281144  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:36:54.523379  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:37:15.005672  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:37:55.970195  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:20.212079  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:20.218531  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:20.229957  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:20.251404  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:20.292798  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:20.374206  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:20.535814  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:20.857503  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:21.499839  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:22.781437  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:24.154682  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:25.342800  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:30.464859  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:40.706626  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:38:41.084754  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:39:01.188753  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:39:17.891893  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:39:42.150131  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-857589 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (5m0.928807885s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-857589 -n default-k8s-diff-port-857589
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.36s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-j5cds" [6f661683-d4db-4e86-a4c4-4f00f9c34882] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003763249s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-j5cds" [6f661683-d4db-4e86-a4c4-4f00f9c34882] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004556495s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-179859 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-179859 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/embed-certs/serial/Pause (3.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-179859 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-179859 -n embed-certs-179859
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-179859 -n embed-certs-179859: exit status 2 (326.562273ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-179859 -n embed-certs-179859
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-179859 -n embed-certs-179859: exit status 2 (342.245424ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-179859 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-179859 -n embed-certs-179859
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-179859 -n embed-certs-179859
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.29s)

TestStartStop/group/newest-cni/serial/FirstStart (36.81s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-122155 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-122155 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (36.811088363s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.81s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pdzkl" [1142bcb8-15f4-4547-8302-f405ce075be3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004707876s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pdzkl" [1142bcb8-15f4-4547-8302-f405ce075be3] Running
E0120 14:40:40.957463  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004134751s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-857589 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-857589 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-857589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-857589 -n default-k8s-diff-port-857589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-857589 -n default-k8s-diff-port-857589: exit status 2 (407.451023ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-857589 -n default-k8s-diff-port-857589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-857589 -n default-k8s-diff-port-857589: exit status 2 (436.568651ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-857589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-857589 -n default-k8s-diff-port-857589
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-857589 -n default-k8s-diff-port-857589
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.88s)

TestNetworkPlugins/group/auto/Start (87.73s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m27.733322707s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.73s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.86s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-122155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-122155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.86015911s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.86s)

TestStartStop/group/newest-cni/serial/Stop (3.68s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-122155 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-122155 --alsologtostderr -v=3: (3.679783928s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.68s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-122155 -n newest-cni-122155
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-122155 -n newest-cni-122155: exit status 7 (117.758722ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-122155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/newest-cni/serial/SecondStart (25.68s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-122155 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 14:41:04.072468  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-122155 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (25.202670832s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-122155 -n newest-cni-122155
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.68s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-122155 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (3.83s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-122155 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-122155 -n newest-cni-122155
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-122155 -n newest-cni-122155: exit status 2 (415.026226ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-122155 -n newest-cni-122155
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-122155 -n newest-cni-122155: exit status 2 (495.684796ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-122155 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-122155 -n newest-cni-122155
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-122155 -n newest-cni-122155
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.83s)
E0120 14:47:19.746043  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:19.752540  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:19.763995  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:19.785432  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:19.826916  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:19.908318  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:20.069889  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:20.391766  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:21.034032  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:22.316010  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:24.412651  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:24.419032  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:24.430410  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:24.451858  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:24.493244  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:24.575435  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:24.737661  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:24.878178  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:25.059920  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:25.701436  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (53.48s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0120 14:41:34.026640  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:42:01.733994  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/old-k8s-version-140749/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (53.4758505s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.48s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-465957 "pgrep -a kubelet"
I0120 14:42:19.482103  747256 config.go:182] Loaded profile config "auto-465957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-465957 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-w4zvx" [5e095a5c-1c8f-42d6-a3e4-d17180452830] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-w4zvx" [5e095a5c-1c8f-42d6-a3e4-d17180452830] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004949927s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.29s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dzklh" [c1edd2aa-4f75-4f9a-9551-7230c9ec3f49] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00474115s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-465957 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-465957 "pgrep -a kubelet"
I0120 14:42:30.703301  747256 config.go:182] Loaded profile config "kindnet-465957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-465957 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wc2dm" [92733806-d6cc-4fe2-ab45-b3ec46496edf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wc2dm" [92733806-d6cc-4fe2-ab45-b3ec46496edf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.008734915s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.28s)

TestNetworkPlugins/group/kindnet/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-465957 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.40s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (77.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m17.95207171s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.95s)

TestNetworkPlugins/group/custom-flannel/Start (60.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0120 14:43:20.212003  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:43:41.084885  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/addons-695399/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:43:47.915044  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/no-preload-193023/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m0.8510051s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.85s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-465957 "pgrep -a kubelet"
I0120 14:44:06.332713  747256 config.go:182] Loaded profile config "custom-flannel-465957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-465957 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-czlm6" [44995af3-d71d-45df-a477-870d29aa9fdc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-czlm6" [44995af3-d71d-45df-a477-870d29aa9fdc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003840013s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rhtfw" [20204b50-0419-44b1-bcb0-2aeb58f98227] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.010777607s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-465957 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-465957 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
I0120 14:44:17.144755  747256 config.go:182] Loaded profile config "calico-465957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-465957 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vv75g" [653272e6-4d33-4380-8d0a-aa489b40c8ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vv75g" [653272e6-4d33-4380-8d0a-aa489b40c8ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004075747s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.33s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-465957 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

TestNetworkPlugins/group/enable-default-cni/Start (50.07s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (50.065609437s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.07s)

TestNetworkPlugins/group/flannel/Start (60.09s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0120 14:45:07.108571  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:07.114921  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:07.126228  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:07.147553  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:07.188886  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:07.270213  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:07.431626  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:07.753186  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:08.395226  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:09.676705  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:12.238222  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:17.360175  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:24.037125  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:45:27.602057  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/default-k8s-diff-port-857589/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.09207143s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-465957 "pgrep -a kubelet"
I0120 14:45:32.085632  747256 config.go:182] Loaded profile config "enable-default-cni-465957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-465957 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t7bf8" [d73a1aca-0a7e-4709-bc13-1b894a40c265] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t7bf8" [d73a1aca-0a7e-4709-bc13-1b894a40c265] Running
E0120 14:45:40.957172  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/functional-563229/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003980984s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.40s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-465957 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
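The Localhost and HairPin subtests both reduce to a timed TCP connect: "nc -z" against localhost:8080, and against the pod's own service name "netcat", the latter passing only when hairpin NAT lets a pod reach itself through its service. A minimal Go equivalent of the hairpin probe, intended to run inside the pod; the host and port come from the commands above, the rest is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the pod's own service name; with hairpin NAT in place the
	// connection loops back through the service VIP to the pod itself.
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Println("hairpin connect failed:", err)
		return
	}
	conn.Close()
	fmt.Println("hairpin connect succeeded")
}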

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cqjbt" [44f3a0de-58ea-43bb-b81c-c88a8ac5a2fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005200641s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-465957 "pgrep -a kubelet"
I0120 14:46:01.413484  747256 config.go:182] Loaded profile config "flannel-465957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (10.45s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-465957 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vphqq" [79f2007c-4c8e-40c0-b995-3ecb6f1440c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vphqq" [79f2007c-4c8e-40c0-b995-3ecb6f1440c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004292447s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.45s)

TestNetworkPlugins/group/bridge/Start (80.76s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-465957 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m20.751627326s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.76s)

TestNetworkPlugins/group/flannel/DNS (0.55s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-465957 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.55s)

TestNetworkPlugins/group/flannel/Localhost (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.44s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-465957 "pgrep -a kubelet"
I0120 14:47:26.472290  747256 config.go:182] Loaded profile config "bridge-465957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-465957 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-pddjv" [671691a8-491d-48f8-bd06-7bee2c42651d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0120 14:47:26.983038  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:29.545137  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-pddjv" [671691a8-491d-48f8-bd06-7bee2c42651d] Running
E0120 14:47:30.000240  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/auto-465957/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:47:34.667085  747256 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/kindnet-465957/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.00414939s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.27s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-465957 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-465957 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (29/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

TestDownloadOnly/v1.32.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

TestDownloadOnly/v1.32.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-834369 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-834369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-834369
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-862248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-862248
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

TestNetworkPlugins/group/kubenet (3.93s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-465957 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-465957

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-465957

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-465957

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-465957

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-465957

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-465957

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-465957

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-465957

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-465957

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-465957

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /etc/hosts:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /etc/resolv.conf:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-465957

>>> host: crictl pods:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: crictl containers:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> k8s: describe netcat deployment:
error: context "kubenet-465957" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-465957" does not exist

>>> k8s: netcat logs:
error: context "kubenet-465957" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-465957" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-465957" does not exist

>>> k8s: coredns logs:
error: context "kubenet-465957" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-465957" does not exist

>>> k8s: api server logs:
error: context "kubenet-465957" does not exist

>>> host: /etc/cni:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: ip a s:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: ip r s:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: iptables-save:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: iptables table nat:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-465957" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-465957" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-465957" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: kubelet daemon config:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> k8s: kubelet logs:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 14:22:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-853381
contexts:
- context:
    cluster: pause-853381
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 14:22:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-853381
  name: pause-853381
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-853381
  user:
    client-certificate: /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/pause-853381/client.crt
    client-key: /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/pause-853381/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-465957

>>> host: docker daemon status:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: docker daemon config:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: docker system info:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: cri-docker daemon status:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: cri-docker daemon config:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: cri-dockerd version:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: containerd daemon status:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: containerd daemon config:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: containerd config dump:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: crio daemon status:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: crio daemon config:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: /etc/crio:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

>>> host: crio config:
* Profile "kubenet-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-465957"

----------------------- debugLogs end: kubenet-465957 [took: 3.769201444s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-465957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-465957
--- SKIP: TestNetworkPlugins/group/kubenet (3.93s)
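Every query in the debugLogs dump above failed the same way because the shared kubeconfig no longer contains a kubenet-465957 entry and its current-context is empty, as the "k8s: kubectl config" section shows. A minimal client-go sketch that reproduces kubectl's error when asked for that context; the kubeconfig path below is an assumption for illustration only:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig and force the deleted context, mirroring what the
	// debugLogs collector does with --context kubenet-465957.
	rules := &clientcmd.ClientConfigLoadingRules{ExplicitPath: "/home/jenkins/.kube/config"} // assumed path
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "kubenet-465957"}
	_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	fmt.Println(err) // e.g. context "kubenet-465957" does not exist
}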

TestNetworkPlugins/group/cilium (4.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-465957 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-465957

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-465957" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-465957" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-465957" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-465957" does not exist

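When the context does exist, the kube-proxy material above comes from ordinary kubectl queries against the kube-system namespace. A minimal sketch, assuming the stock k8s-app=kube-proxy label that kubeadm applies:

    kubectl --context <context> -n kube-system describe ds kube-proxy
    kubectl --context <context> -n kube-system logs -l k8s-app=kube-proxy --tail=50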
>>> host: kubelet daemon status:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: kubelet daemon config:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> k8s: kubelet logs:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

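The kubelet checks above (daemon status, daemon config, journal logs, and the two config files) all target a systemd unit on the node rather than a pod, so they are read over SSH. A minimal sketch against a live profile:

    minikube -p <profile> ssh "sudo systemctl status kubelet"
    minikube -p <profile> ssh "sudo journalctl -u kubelet -n 100 --no-pager"
    minikube -p <profile> ssh "sudo cat /var/lib/kubelet/config.yaml"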
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20242-741865/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 14:22:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-853381
contexts:
- context:
    cluster: pause-853381
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 14:22:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-853381
  name: pause-853381
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-853381
  user:
    client-certificate: /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/pause-853381/client.crt
    client-key: /home/jenkins/minikube-integration/20242-741865/.minikube/profiles/pause-853381/client.key

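Note that the dumped kubeconfig only knows about the pause-853381 profile and has an empty current-context, which is consistent with the cilium-465957 profile never having been created. To inspect the same state on a live host:

    # list every context kubectl knows about
    kubectl config get-contexts
    # print the merged config, as dumped above
    kubectl config view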
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-465957

>>> host: docker daemon status:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: docker daemon config:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: docker system info:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: cri-docker daemon status:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: cri-docker daemon config:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: cri-dockerd version:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: containerd daemon status:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: containerd daemon config:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: containerd config dump:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: crio daemon status:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: crio daemon config:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: /etc/crio:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

>>> host: crio config:
* Profile "cilium-465957" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-465957"

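The container-runtime checks above (docker, cri-docker, containerd, crio) all follow the same pattern: query the systemd unit and dump its config from inside the node. A minimal sketch for the containerd case, which is the runtime this job actually uses:

    minikube -p <profile> ssh "sudo systemctl status containerd"
    minikube -p <profile> ssh "sudo cat /etc/containerd/config.toml"
    minikube -p <profile> ssh "sudo containerd config dump"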
----------------------- debugLogs end: cilium-465957 [took: 4.076464097s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-465957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-465957
--- SKIP: TestNetworkPlugins/group/cilium (4.24s)
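Every collector in this debugLogs block failed for the same benign reason: the cilium network-plugin test was skipped before a cluster was provisioned, so neither a "cilium-465957" minikube profile nor a matching kubeconfig context ever existed. To confirm from a checkout, listing the known profiles should show pause-853381 (among any others) but no cilium entry:

    out/minikube-linux-arm64 profile list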
