Test Report: Docker_Linux_containerd_arm64 20151

33072eff0e89b858b45dc04bb45c552eedaf3583:2025-01-20:37991

Test failures (2/282)

Order  Failed test                                              Duration (s)
304    TestStartStop/group/old-k8s-version/serial/SecondStart      378.09
351    TestStartStop/group/no-preload/serial/Pause                 7200.083
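Note the second failure's duration: 7200.083 s is, to within a fraction of a second, exactly two hours, which suggests the Pause test was cut off by a suite-level timeout rather than failing an assertion. A minimal Go sketch of that sanity check (the 2 h timeout value is an assumption for illustration; the report does not show the harness's -timeout flag):

```go
package main

import (
	"fmt"
	"time"
)

// nearTimeout reports whether a test duration sits within one second of the
// (assumed) suite timeout, i.e. the run was likely killed, not failed.
func nearTimeout(d, timeout time.Duration) bool {
	return d >= timeout-time.Second
}

func main() {
	failures := map[string]time.Duration{
		"TestStartStop/group/old-k8s-version/serial/SecondStart": 378090 * time.Millisecond,
		"TestStartStop/group/no-preload/serial/Pause":            7200083 * time.Millisecond,
	}
	for name, d := range failures {
		if nearTimeout(d, 2*time.Hour) { // assumed 2h suite timeout
			fmt.Printf("%s (%v) likely hit the suite timeout\n", name, d)
		}
	}
}
```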
TestStartStop/group/old-k8s-version/serial/SecondStart (378.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-618033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-618033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m14.521511705s)
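The harness reports the failure through its own (dbg) runner; to reproduce the same observation outside the suite, the standard os/exec pattern below captures the command's combined output and exit status. This is a sketch only, not the test's actual helper; the minikube flags are copied from the invocation above, trimmed for brevity:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Re-run the failing start command; in the log above this invocation
	// returned exit status 102 after 6m14s.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "old-k8s-version-618033", "--memory=2200",
		"--alsologtostderr", "--wait=true", "--driver=docker",
		"--container-runtime=containerd", "--kubernetes-version=v1.20.0")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
	case err != nil:
		fmt.Println("could not run command:", err)
	default:
		fmt.Print(string(out))
	}
}
```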

-- stdout --
	* [old-k8s-version-618033] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-618033" primary control-plane node in "old-k8s-version-618033" cluster
	* Pulling base image v0.0.46 ...
	* Restarting existing docker container for "old-k8s-version-618033" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-618033 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0120 12:26:52.472735  663170 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:26:52.472884  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:26:52.472912  663170 out.go:358] Setting ErrFile to fd 2...
	I0120 12:26:52.472931  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:26:52.473227  663170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	I0120 12:26:52.473730  663170 out.go:352] Setting JSON to false
	I0120 12:26:52.474835  663170 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7758,"bootTime":1737368255,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 12:26:52.474911  663170 start.go:139] virtualization:  
	I0120 12:26:52.478171  663170 out.go:177] * [old-k8s-version-618033] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 12:26:52.482049  663170 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:26:52.482178  663170 notify.go:220] Checking for updates...
	I0120 12:26:52.488295  663170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:26:52.491245  663170 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	I0120 12:26:52.494196  663170 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	I0120 12:26:52.497157  663170 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 12:26:52.500153  663170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:26:52.503748  663170 config.go:182] Loaded profile config "old-k8s-version-618033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 12:26:52.507369  663170 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 12:26:52.510244  663170 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:26:52.539043  663170 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 12:26:52.539177  663170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 12:26:52.599859  663170 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 12:26:52.589340967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 12:26:52.600056  663170 docker.go:318] overlay module found
	I0120 12:26:52.603234  663170 out.go:177] * Using the docker driver based on existing profile
	I0120 12:26:52.606992  663170 start.go:297] selected driver: docker
	I0120 12:26:52.607020  663170 start.go:901] validating driver "docker" against &{Name:old-k8s-version-618033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:26:52.607246  663170 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:26:52.608136  663170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 12:26:52.670376  663170 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 12:26:52.661124274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 12:26:52.670787  663170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:26:52.670816  663170 cni.go:84] Creating CNI manager for ""
	I0120 12:26:52.670857  663170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 12:26:52.670902  663170 start.go:340] cluster config:
	{Name:old-k8s-version-618033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:26:52.675861  663170 out.go:177] * Starting "old-k8s-version-618033" primary control-plane node in "old-k8s-version-618033" cluster
	I0120 12:26:52.678678  663170 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 12:26:52.681617  663170 out.go:177] * Pulling base image v0.0.46 ...
	I0120 12:26:52.684463  663170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 12:26:52.684500  663170 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 12:26:52.684526  663170 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0120 12:26:52.684536  663170 cache.go:56] Caching tarball of preloaded images
	I0120 12:26:52.684619  663170 preload.go:172] Found /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0120 12:26:52.684630  663170 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0120 12:26:52.684757  663170 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/config.json ...
	I0120 12:26:52.705465  663170 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0120 12:26:52.705489  663170 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0120 12:26:52.705508  663170 cache.go:227] Successfully downloaded all kic artifacts
	I0120 12:26:52.705540  663170 start.go:360] acquireMachinesLock for old-k8s-version-618033: {Name:mkb3e387ac9b6c1340636316ee22387c36aa6166 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:26:52.705653  663170 start.go:364] duration metric: took 89.896µs to acquireMachinesLock for "old-k8s-version-618033"
	I0120 12:26:52.705743  663170 start.go:96] Skipping create...Using existing machine configuration
	I0120 12:26:52.705751  663170 fix.go:54] fixHost starting: 
	I0120 12:26:52.706226  663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
	I0120 12:26:52.725106  663170 fix.go:112] recreateIfNeeded on old-k8s-version-618033: state=Stopped err=<nil>
	W0120 12:26:52.725136  663170 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 12:26:52.728528  663170 out.go:177] * Restarting existing docker container for "old-k8s-version-618033" ...
	I0120 12:26:52.731352  663170 cli_runner.go:164] Run: docker start old-k8s-version-618033
	I0120 12:26:53.030735  663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
	I0120 12:26:53.054739  663170 kic.go:430] container "old-k8s-version-618033" state is running.
	I0120 12:26:53.055157  663170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-618033
	I0120 12:26:53.078614  663170 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/config.json ...
	I0120 12:26:53.078843  663170 machine.go:93] provisionDockerMachine start ...
	I0120 12:26:53.078903  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:53.101652  663170 main.go:141] libmachine: Using SSH client type: native
	I0120 12:26:53.101933  663170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0120 12:26:53.101943  663170 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:26:53.102913  663170 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0120 12:26:56.229030  663170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-618033
	
	I0120 12:26:56.229099  663170 ubuntu.go:169] provisioning hostname "old-k8s-version-618033"
	I0120 12:26:56.229173  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:56.247570  663170 main.go:141] libmachine: Using SSH client type: native
	I0120 12:26:56.247834  663170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0120 12:26:56.247855  663170 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-618033 && echo "old-k8s-version-618033" | sudo tee /etc/hostname
	I0120 12:26:56.382124  663170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-618033
	
	I0120 12:26:56.382204  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:56.400193  663170 main.go:141] libmachine: Using SSH client type: native
	I0120 12:26:56.400525  663170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0120 12:26:56.400638  663170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-618033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-618033/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-618033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:26:56.525972  663170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:26:56.525998  663170 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20151-446459/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-446459/.minikube}
	I0120 12:26:56.526027  663170 ubuntu.go:177] setting up certificates
	I0120 12:26:56.526038  663170 provision.go:84] configureAuth start
	I0120 12:26:56.526099  663170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-618033
	I0120 12:26:56.543361  663170 provision.go:143] copyHostCerts
	I0120 12:26:56.543430  663170 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem, removing ...
	I0120 12:26:56.543444  663170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem
	I0120 12:26:56.543518  663170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem (1082 bytes)
	I0120 12:26:56.543638  663170 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem, removing ...
	I0120 12:26:56.543656  663170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem
	I0120 12:26:56.543686  663170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem (1123 bytes)
	I0120 12:26:56.543745  663170 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem, removing ...
	I0120 12:26:56.543753  663170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem
	I0120 12:26:56.543779  663170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem (1675 bytes)
	I0120 12:26:56.543835  663170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-618033 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-618033]
	I0120 12:26:56.797916  663170 provision.go:177] copyRemoteCerts
	I0120 12:26:56.797993  663170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:26:56.798050  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:56.816238  663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
	I0120 12:26:56.910907  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 12:26:56.935780  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 12:26:56.964919  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:26:56.991221  663170 provision.go:87] duration metric: took 465.16926ms to configureAuth
	I0120 12:26:56.991254  663170 ubuntu.go:193] setting minikube options for container-runtime
	I0120 12:26:56.991464  663170 config.go:182] Loaded profile config "old-k8s-version-618033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 12:26:56.991478  663170 machine.go:96] duration metric: took 3.91262644s to provisionDockerMachine
	I0120 12:26:56.991486  663170 start.go:293] postStartSetup for "old-k8s-version-618033" (driver="docker")
	I0120 12:26:56.991497  663170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:26:56.991553  663170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:26:56.991599  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:57.028683  663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
	I0120 12:26:57.123636  663170 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:26:57.127074  663170 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0120 12:26:57.127114  663170 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0120 12:26:57.127134  663170 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0120 12:26:57.127142  663170 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0120 12:26:57.127153  663170 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-446459/.minikube/addons for local assets ...
	I0120 12:26:57.127215  663170 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-446459/.minikube/files for local assets ...
	I0120 12:26:57.127308  663170 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem -> 4518352.pem in /etc/ssl/certs
	I0120 12:26:57.127434  663170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:26:57.136651  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem --> /etc/ssl/certs/4518352.pem (1708 bytes)
	I0120 12:26:57.163857  663170 start.go:296] duration metric: took 172.353896ms for postStartSetup
	I0120 12:26:57.163948  663170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 12:26:57.163993  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:57.182335  663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
	I0120 12:26:57.274452  663170 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0120 12:26:57.278997  663170 fix.go:56] duration metric: took 4.573235899s for fixHost
	I0120 12:26:57.279022  663170 start.go:83] releasing machines lock for "old-k8s-version-618033", held for 4.573308096s
	I0120 12:26:57.279096  663170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-618033
	I0120 12:26:57.296365  663170 ssh_runner.go:195] Run: cat /version.json
	I0120 12:26:57.296424  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:57.296703  663170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:26:57.296754  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:57.314709  663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
	I0120 12:26:57.332055  663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
	I0120 12:26:57.409216  663170 ssh_runner.go:195] Run: systemctl --version
	I0120 12:26:57.541977  663170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0120 12:26:57.546985  663170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0120 12:26:57.567729  663170 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0120 12:26:57.567825  663170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:26:57.576894  663170 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 12:26:57.576927  663170 start.go:495] detecting cgroup driver to use...
	I0120 12:26:57.576960  663170 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0120 12:26:57.577022  663170 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 12:26:57.591237  663170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 12:26:57.611185  663170 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:26:57.611281  663170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:26:57.624724  663170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:26:57.637354  663170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:26:57.739028  663170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:26:57.871908  663170 docker.go:233] disabling docker service ...
	I0120 12:26:57.871984  663170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:26:57.888145  663170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:26:57.904839  663170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:26:58.033309  663170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:26:58.146945  663170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:26:58.164074  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:26:58.185362  663170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0120 12:26:58.198087  663170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 12:26:58.209764  663170 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 12:26:58.209890  663170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 12:26:58.222329  663170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:26:58.233220  663170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 12:26:58.243859  663170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:26:58.254316  663170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:26:58.263730  663170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 12:26:58.274389  663170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:26:58.283291  663170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:26:58.291912  663170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:26:58.381082  663170 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0120 12:26:58.543727  663170 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 12:26:58.543801  663170 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 12:26:58.547896  663170 start.go:563] Will wait 60s for crictl version
	I0120 12:26:58.548012  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:26:58.552300  663170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:26:58.595990  663170 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0120 12:26:58.596072  663170 ssh_runner.go:195] Run: containerd --version
	I0120 12:26:58.622667  663170 ssh_runner.go:195] Run: containerd --version
	I0120 12:26:58.648645  663170 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	I0120 12:26:58.651904  663170 cli_runner.go:164] Run: docker network inspect old-k8s-version-618033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0120 12:26:58.672645  663170 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0120 12:26:58.676682  663170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:26:58.688048  663170 kubeadm.go:883] updating cluster {Name:old-k8s-version-618033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:26:58.688177  663170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 12:26:58.688237  663170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:26:58.725695  663170 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 12:26:58.725720  663170 containerd.go:534] Images already preloaded, skipping extraction
	I0120 12:26:58.725805  663170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:26:58.771363  663170 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 12:26:58.771392  663170 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:26:58.771402  663170 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0120 12:26:58.771539  663170 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-618033 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:26:58.771614  663170 ssh_runner.go:195] Run: sudo crictl info
	I0120 12:26:58.811582  663170 cni.go:84] Creating CNI manager for ""
	I0120 12:26:58.811611  663170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 12:26:58.811623  663170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:26:58.811675  663170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-618033 NodeName:old-k8s-version-618033 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 12:26:58.811844  663170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-618033"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:26:58.811945  663170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 12:26:58.821057  663170 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:26:58.821153  663170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:26:58.830239  663170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0120 12:26:58.854688  663170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:26:58.873324  663170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0120 12:26:58.892395  663170 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0120 12:26:58.895899  663170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:26:58.907300  663170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:26:59.046687  663170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:26:59.065284  663170 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033 for IP: 192.168.85.2
	I0120 12:26:59.065311  663170 certs.go:194] generating shared ca certs ...
	I0120 12:26:59.065328  663170 certs.go:226] acquiring lock for ca certs: {Name:mkcccec907119c13813a959b3b756156d7101c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:26:59.065535  663170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-446459/.minikube/ca.key
	I0120 12:26:59.065620  663170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.key
	I0120 12:26:59.065638  663170 certs.go:256] generating profile certs ...
	I0120 12:26:59.065739  663170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.key
	I0120 12:26:59.065861  663170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/apiserver.key.a7955a31
	I0120 12:26:59.065930  663170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/proxy-client.key
	I0120 12:26:59.066084  663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835.pem (1338 bytes)
	W0120 12:26:59.066136  663170 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835_empty.pem, impossibly tiny 0 bytes
	I0120 12:26:59.066150  663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 12:26:59.066191  663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem (1082 bytes)
	I0120 12:26:59.066233  663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:26:59.066267  663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem (1675 bytes)
	I0120 12:26:59.066313  663170 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem (1708 bytes)
	I0120 12:26:59.067141  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:26:59.099113  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:26:59.133809  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:26:59.194639  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 12:26:59.229900  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 12:26:59.261290  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:26:59.289275  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:26:59.314283  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 12:26:59.339248  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem --> /usr/share/ca-certificates/4518352.pem (1708 bytes)
	I0120 12:26:59.365723  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:26:59.390722  663170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835.pem --> /usr/share/ca-certificates/451835.pem (1338 bytes)
	I0120 12:26:59.415267  663170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:26:59.433136  663170 ssh_runner.go:195] Run: openssl version
	I0120 12:26:59.439087  663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4518352.pem && ln -fs /usr/share/ca-certificates/4518352.pem /etc/ssl/certs/4518352.pem"
	I0120 12:26:59.449175  663170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4518352.pem
	I0120 12:26:59.452764  663170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:45 /usr/share/ca-certificates/4518352.pem
	I0120 12:26:59.452853  663170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4518352.pem
	I0120 12:26:59.459802  663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4518352.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:26:59.469014  663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:26:59.478543  663170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:26:59.482479  663170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:38 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:26:59.482549  663170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:26:59.490026  663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:26:59.499564  663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/451835.pem && ln -fs /usr/share/ca-certificates/451835.pem /etc/ssl/certs/451835.pem"
	I0120 12:26:59.509525  663170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/451835.pem
	I0120 12:26:59.513258  663170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:45 /usr/share/ca-certificates/451835.pem
	I0120 12:26:59.513330  663170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/451835.pem
	I0120 12:26:59.520584  663170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/451835.pem /etc/ssl/certs/51391683.0"
	I0120 12:26:59.530120  663170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:26:59.534495  663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 12:26:59.542138  663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 12:26:59.549707  663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 12:26:59.558567  663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 12:26:59.566528  663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 12:26:59.576313  663170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 12:26:59.585436  663170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-618033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-618033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:26:59.585527  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 12:26:59.585683  663170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:26:59.629925  663170 cri.go:89] found id: "3513d77a54b31bffdcc1bbcf5c23a22ceb456d92983f2bf891fef527b1e11c79"
	I0120 12:26:59.629959  663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
	I0120 12:26:59.629965  663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
	I0120 12:26:59.629969  663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
	I0120 12:26:59.629972  663170 cri.go:89] found id: "9356782751c42b29ae874fda487e04d94022a03286f14a2f8339eba1d542c7f1"
	I0120 12:26:59.629976  663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
	I0120 12:26:59.630000  663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
	I0120 12:26:59.630012  663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
	I0120 12:26:59.630015  663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
	I0120 12:26:59.630026  663170 cri.go:89] found id: ""
	I0120 12:26:59.630091  663170 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0120 12:26:59.642447  663170 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-20T12:26:59Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0120 12:26:59.642521  663170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:26:59.652089  663170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 12:26:59.652108  663170 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 12:26:59.652187  663170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 12:26:59.660764  663170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 12:26:59.661365  663170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-618033" does not appear in /home/jenkins/minikube-integration/20151-446459/kubeconfig
	I0120 12:26:59.661731  663170 kubeconfig.go:62] /home/jenkins/minikube-integration/20151-446459/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-618033" cluster setting kubeconfig missing "old-k8s-version-618033" context setting]
	I0120 12:26:59.662202  663170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/kubeconfig: {Name:mkd202431392e920a92afeece62697072b25ee29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:26:59.663634  663170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 12:26:59.672553  663170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0120 12:26:59.672641  663170 kubeadm.go:597] duration metric: took 20.524986ms to restartPrimaryControlPlane
	I0120 12:26:59.672658  663170 kubeadm.go:394] duration metric: took 87.229837ms to StartCluster
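
Note: two checks above decide the restart path. The presence of the prior kubeadm state files (/var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml, /var/lib/minikube/etcd) selects an in-place cluster restart over a fresh kubeadm init, and the diff of the freshly generated kubeadm.yaml against the copy already on disk, together with the unchanged node IP (192.168.85.2), shows no reconfiguration is needed, which is why restartPrimaryControlPlane completes in about 20ms. Expressed as a small Go sketch (helper and parameter names are mine):

    package kubeadm

    // RestartDecision mirrors the two shell probes in the log: prior kubeadm
    // state on disk selects restart-over-init, and a clean kubeadm.yaml diff
    // means the running control plane can be left as-is. The run parameter
    // stands in for an SSH command runner like minikube's ssh_runner.
    func RestartDecision(run func(name string, args ...string) error) (restart, reconfigure bool) {
        restart = run("sudo", "ls",
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd") == nil
        reconfigure = run("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new") != nil
        return restart, reconfigure
    }
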
	I0120 12:26:59.672675  663170 settings.go:142] acquiring lock: {Name:mka92edde1befc8914a01871e41167ef1a7b90c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:26:59.672749  663170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-446459/kubeconfig
	I0120 12:26:59.673666  663170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/kubeconfig: {Name:mkd202431392e920a92afeece62697072b25ee29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:26:59.673908  663170 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 12:26:59.674285  663170 config.go:182] Loaded profile config "old-k8s-version-618033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 12:26:59.674355  663170 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:26:59.674441  663170 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-618033"
	I0120 12:26:59.674449  663170 addons.go:69] Setting dashboard=true in profile "old-k8s-version-618033"
	I0120 12:26:59.674456  663170 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-618033"
	W0120 12:26:59.674463  663170 addons.go:247] addon storage-provisioner should already be in state true
	I0120 12:26:59.674466  663170 addons.go:238] Setting addon dashboard=true in "old-k8s-version-618033"
	W0120 12:26:59.674473  663170 addons.go:247] addon dashboard should already be in state true
	I0120 12:26:59.674488  663170 host.go:66] Checking if "old-k8s-version-618033" exists ...
	I0120 12:26:59.674494  663170 host.go:66] Checking if "old-k8s-version-618033" exists ...
	I0120 12:26:59.674927  663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
	I0120 12:26:59.675089  663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
	I0120 12:26:59.675574  663170 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-618033"
	I0120 12:26:59.675602  663170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-618033"
	I0120 12:26:59.675886  663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
	I0120 12:26:59.679790  663170 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-618033"
	I0120 12:26:59.679820  663170 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-618033"
	W0120 12:26:59.679828  663170 addons.go:247] addon metrics-server should already be in state true
	I0120 12:26:59.679862  663170 host.go:66] Checking if "old-k8s-version-618033" exists ...
	I0120 12:26:59.679917  663170 out.go:177] * Verifying Kubernetes components...
	I0120 12:26:59.680333  663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
	I0120 12:26:59.685904  663170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:26:59.725715  663170 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 12:26:59.732707  663170 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 12:26:59.736225  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 12:26:59.736254  663170 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 12:26:59.736338  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:59.753277  663170 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:26:59.756593  663170 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:26:59.756616  663170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:26:59.756683  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:59.761139  663170 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 12:26:59.769706  663170 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:26:59.769761  663170 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:26:59.769835  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:59.782146  663170 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-618033"
	W0120 12:26:59.782171  663170 addons.go:247] addon default-storageclass should already be in state true
	I0120 12:26:59.782197  663170 host.go:66] Checking if "old-k8s-version-618033" exists ...
	I0120 12:26:59.782611  663170 cli_runner.go:164] Run: docker container inspect old-k8s-version-618033 --format={{.State.Status}}
	I0120 12:26:59.818964  663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
	I0120 12:26:59.833932  663170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:26:59.855112  663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
	I0120 12:26:59.856845  663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
	I0120 12:26:59.865501  663170 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:26:59.865526  663170 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:26:59.865606  663170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-618033
	I0120 12:26:59.900712  663170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/old-k8s-version-618033/id_rsa Username:docker}
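
Note: the scp/sshutil lines above are the addon installation pipeline: each manifest is copied into the node's /etc/kubernetes/addons over the SSH port docker published for the container (127.0.0.1:33464 here), then applied with the kubectl binary matching the cluster version. A CLI-equivalent sketch of the same data flow (minikube uses an in-process SSH client rather than the scp/ssh binaries; names are mine):

    package addons

    import "os/exec"

    // PushManifest copies one addon manifest into the node over the docker-
    // published SSH port and applies it with the versioned kubectl, the same
    // two steps the scp and kubectl-apply lines in the log perform.
    func PushManifest(port, keyPath, local, remote, kubectlPath string) error {
        if err := exec.Command("scp", "-P", port, "-i", keyPath,
            local, "docker@127.0.0.1:"+remote).Run(); err != nil {
            return err
        }
        return exec.Command("ssh", "-p", port, "-i", keyPath, "docker@127.0.0.1",
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+kubectlPath+" apply -f "+remote).Run()
    }
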
	I0120 12:26:59.902855  663170 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-618033" to be "Ready" ...
	I0120 12:26:59.967635  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:27:00.010029  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 12:27:00.010058  663170 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 12:27:00.050914  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:27:00.055656  663170 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:27:00.055738  663170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 12:27:00.095443  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 12:27:00.095526  663170 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 12:27:00.104190  663170 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:27:00.104290  663170 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	W0120 12:27:00.169211  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.169422  663170 retry.go:31] will retry after 371.274924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
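
Note: everything from here until the apiserver comes back (around 12:27:10) is one pattern repeating: each kubectl apply is refused on localhost:8443 because the restarted apiserver is not listening yet, and retry.go re-runs it after a randomized, growing delay until it succeeds. A sketch of that loop under my own names (not minikube's exact retry implementation):

    package retry

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // ApplyWithRetry re-runs `kubectl apply --force -f manifest` until it
    // succeeds or the deadline passes, sleeping a randomized, growing
    // interval between attempts -- the same shape as the retry.go lines in
    // the log, which back off from a few hundred milliseconds to several
    // seconds.
    func ApplyWithRetry(kubectl, manifest string, deadline time.Duration) error {
        start := time.Now()
        backoff := 300 * time.Millisecond
        for {
            out, err := exec.Command(kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v\n", sleep)
            time.Sleep(sleep)
            backoff *= 2
        }
    }

Backing off this way keeps the log noisy but lets the addon applies succeed as soon as the apiserver binds 8443, without a hard ordering dependency between control-plane restart and addon installation.
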
	I0120 12:27:00.176423  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 12:27:00.176506  663170 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 12:27:00.176621  663170 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:27:00.176647  663170 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:27:00.221713  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 12:27:00.221777  663170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 12:27:00.223969  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 12:27:00.254834  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.254938  663170 retry.go:31] will retry after 284.770624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.266988  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 12:27:00.267099  663170 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 12:27:00.302692  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 12:27:00.302790  663170 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 12:27:00.328890  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 12:27:00.328975  663170 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 12:27:00.360106  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 12:27:00.360185  663170 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0120 12:27:00.364469  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.364512  663170 retry.go:31] will retry after 323.50775ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.386644  663170 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:27:00.386672  663170 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 12:27:00.408309  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 12:27:00.486324  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.486372  663170 retry.go:31] will retry after 137.676303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.540528  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:27:00.541065  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:27:00.624878  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:27:00.688601  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 12:27:00.887225  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.887257  663170 retry.go:31] will retry after 234.141798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 12:27:00.887310  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.887316  663170 retry.go:31] will retry after 476.099981ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 12:27:00.887352  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.887359  663170 retry.go:31] will retry after 538.180139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 12:27:00.974577  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:00.974615  663170 retry.go:31] will retry after 415.665863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:01.122069  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 12:27:01.211110  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:01.211148  663170 retry.go:31] will retry after 829.770354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:01.364404  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:27:01.390708  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:27:01.425847  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 12:27:01.556576  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:01.556722  663170 retry.go:31] will retry after 744.642747ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 12:27:01.556805  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:01.556823  663170 retry.go:31] will retry after 690.057704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 12:27:01.583210  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:01.583247  663170 retry.go:31] will retry after 405.23663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:01.903941  663170 node_ready.go:53] error getting node "old-k8s-version-618033": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-618033": dial tcp 192.168.85.2:8443: connect: connection refused
	I0120 12:27:01.989334  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:27:02.041888  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 12:27:02.081278  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:02.081320  663170 retry.go:31] will retry after 941.982727ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 12:27:02.135113  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:02.135151  663170 retry.go:31] will retry after 450.629194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:02.247518  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:27:02.302190  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 12:27:02.336794  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:02.336825  663170 retry.go:31] will retry after 1.174787233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 12:27:02.384162  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:02.384203  663170 retry.go:31] will retry after 1.183556458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:02.586303  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 12:27:02.662260  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:02.662322  663170 retry.go:31] will retry after 1.413480727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:03.023957  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 12:27:03.117337  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:03.117374  663170 retry.go:31] will retry after 1.679499016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:03.512572  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:27:03.568166  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 12:27:03.595069  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:03.595103  663170 retry.go:31] will retry after 839.161021ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0120 12:27:03.648816  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:03.648850  663170 retry.go:31] will retry after 675.766125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:04.076687  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 12:27:04.147761  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:04.147798  663170 retry.go:31] will retry after 2.80568394s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:04.324894  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 12:27:04.402816  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:04.402849  663170 retry.go:31] will retry after 951.259554ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:04.403274  663170 node_ready.go:53] error getting node "old-k8s-version-618033": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-618033": dial tcp 192.168.85.2:8443: connect: connection refused
	I0120 12:27:04.434561  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 12:27:04.508033  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:04.508071  663170 retry.go:31] will retry after 2.20700865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:04.797515  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 12:27:04.880328  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:04.880365  663170 retry.go:31] will retry after 1.302846526s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:05.354329  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 12:27:05.433734  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:05.433767  663170 retry.go:31] will retry after 2.730402576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:06.184042  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0120 12:27:06.269689  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:06.269725  663170 retry.go:31] will retry after 4.230833571s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:06.404455  663170 node_ready.go:53] error getting node "old-k8s-version-618033": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-618033": dial tcp 192.168.85.2:8443: connect: connection refused
	I0120 12:27:06.716656  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0120 12:27:06.791000  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:06.791037  663170 retry.go:31] will retry after 3.873216238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:06.953926  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0120 12:27:07.026909  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:07.026946  663170 retry.go:31] will retry after 2.249294821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:08.164697  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0120 12:27:08.334716  663170 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:08.334750  663170 retry.go:31] will retry after 5.697542223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0120 12:27:09.277324  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:27:10.500793  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 12:27:10.665214  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:27:14.033411  663170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:27:17.358066  663170 node_ready.go:49] node "old-k8s-version-618033" has status "Ready":"True"
	I0120 12:27:17.358089  663170 node_ready.go:38] duration metric: took 17.455197236s for node "old-k8s-version-618033" to be "Ready" ...
	I0120 12:27:17.358099  663170 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
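
Note: from this point the flow is pure polling: node_ready.go and pod_ready.go re-check the Ready condition on the node and on each system-critical pod until it reports True or the 6m budget expires; the pod_ready.go:103 lines below are individual not-yet-ready polls. The predicate being tested is just the pod's Ready condition, for example (helper name mine; assumes kubectl on PATH and a configured kubeconfig):

    package ready

    import (
        "os/exec"
        "strings"
    )

    // PodReady reports whether the named pod's Ready condition is True,
    // the same predicate the pod_ready.go lines keep re-testing until it
    // flips or the wait budget runs out.
    func PodReady(namespace, pod string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }
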
	I0120 12:27:17.441607  663170 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-vjbl2" in "kube-system" namespace to be "Ready" ...
	I0120 12:27:17.605819  663170 pod_ready.go:93] pod "coredns-74ff55c5b-vjbl2" in "kube-system" namespace has status "Ready":"True"
	I0120 12:27:17.605897  663170 pod_ready.go:82] duration metric: took 164.199283ms for pod "coredns-74ff55c5b-vjbl2" in "kube-system" namespace to be "Ready" ...
	I0120 12:27:17.605924  663170 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
	I0120 12:27:17.640382  663170 pod_ready.go:93] pod "etcd-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"True"
	I0120 12:27:17.640462  663170 pod_ready.go:82] duration metric: took 34.514694ms for pod "etcd-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
	I0120 12:27:17.640495  663170 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
	I0120 12:27:18.311838  663170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.034473628s)
	I0120 12:27:18.639698  663170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.138858793s)
	I0120 12:27:18.640080  663170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.974836712s)
	I0120 12:27:18.640223  663170 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-618033"
	I0120 12:27:18.640180  663170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.606733796s)
	I0120 12:27:18.641489  663170 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-618033 addons enable metrics-server
	
	I0120 12:27:18.642854  663170 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 12:27:18.644180  663170 addons.go:514] duration metric: took 18.969824526s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
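	The pod_ready lines throughout this log come from a poll that re-fetches each pod and checks its Ready condition until it turns True or the per-pod deadline expires. A minimal client-go sketch of that check follows, reusing the kubeconfig path and coredns pod name seen in this run; it is an illustration of the pattern, not minikube's actual pod_ready.go, and the 2s poll interval is an assumption.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady mirrors the `has status "Ready":"True"` lines in this log:
	// it reports whether the pod's PodReady condition is currently True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path as used on the node in this run (assumption for the sketch).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// 6m0s matches the per-pod wait advertised in the log lines above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-74ff55c5b-vjbl2", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				// Same terminal outcome logged later for metrics-server:
				// waitPodCondition: context deadline exceeded
				fmt.Println("context deadline exceeded")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}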
	I0120 12:27:19.648084  663170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:22.146943  663170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:24.151010  663170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:26.646613  663170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:27.146609  663170 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"True"
	I0120 12:27:27.146637  663170 pod_ready.go:82] duration metric: took 9.506120108s for pod "kube-apiserver-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
	I0120 12:27:27.146653  663170 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
	I0120 12:27:29.155666  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:31.156289  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:33.657736  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:35.665731  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:38.158101  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:40.654646  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:43.155925  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:45.158031  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:47.657813  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:50.165941  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:52.654334  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:55.154290  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:57.653678  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:27:59.658499  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:02.153839  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:04.153924  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:06.656230  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:09.152931  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:11.155033  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:13.652567  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:15.653794  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:17.656923  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:20.153794  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:22.154276  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:24.652666  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:26.653240  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:28.653792  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:31.154010  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:33.652452  663170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:35.652633  663170 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"True"
	I0120 12:28:35.652657  663170 pod_ready.go:82] duration metric: took 1m8.505964238s for pod "kube-controller-manager-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
	I0120 12:28:35.652670  663170 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q2cdx" in "kube-system" namespace to be "Ready" ...
	I0120 12:28:35.658056  663170 pod_ready.go:93] pod "kube-proxy-q2cdx" in "kube-system" namespace has status "Ready":"True"
	I0120 12:28:35.658082  663170 pod_ready.go:82] duration metric: took 5.404269ms for pod "kube-proxy-q2cdx" in "kube-system" namespace to be "Ready" ...
	I0120 12:28:35.658095  663170 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
	I0120 12:28:37.665722  663170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:40.165049  663170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:42.172985  663170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:42.664829  663170 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace has status "Ready":"True"
	I0120 12:28:42.664859  663170 pod_ready.go:82] duration metric: took 7.006756186s for pod "kube-scheduler-old-k8s-version-618033" in "kube-system" namespace to be "Ready" ...
	I0120 12:28:42.664872  663170 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace to be "Ready" ...
	I0120 12:28:44.671983  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:47.170635  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:49.172183  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:51.675108  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:53.675558  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:56.171952  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:28:58.175246  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:00.192941  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:02.671331  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:04.675940  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:07.171664  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:09.175340  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:11.671720  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:14.170833  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:16.172791  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:18.671698  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:21.171053  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:23.175126  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:25.670895  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:27.671401  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:30.176056  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:32.671628  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:34.675586  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:37.171351  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:39.171662  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:41.671072  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:43.671381  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:46.170595  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:48.175411  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:50.671807  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:53.177132  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:55.670815  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:57.670978  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:29:59.671514  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:01.672223  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:04.172077  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:06.671344  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:08.677088  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:11.172276  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:13.671449  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:16.172081  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:18.671313  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:20.671579  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:22.672719  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:25.172188  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:27.174389  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:29.671863  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:32.171468  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:34.671534  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:36.671809  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:39.171372  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:41.172094  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:43.177396  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:45.674333  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:48.171995  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:50.670614  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:52.671705  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:54.673264  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:57.170991  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:30:59.171272  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:01.172964  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:03.671222  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:06.171532  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:08.171887  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:10.172406  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:12.671553  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:15.172651  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:17.671240  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:19.672000  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:22.171027  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:24.171468  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:26.672609  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:29.172730  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:31.670890  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:33.671955  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:36.171491  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:38.172489  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:40.671650  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:43.171738  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:45.182052  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:47.672271  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:50.170716  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:52.172555  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:54.670689  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:56.675262  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:31:58.675720  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:01.172517  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:03.670815  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:06.172162  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:08.672118  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:11.172318  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:13.173945  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:15.176805  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:17.673799  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.173022  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.174529  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:24.671226  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.671707  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:28.676084  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.173025  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:33.175082  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:35.672357  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:38.172501  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.172959  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.174961  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.665891  663170 pod_ready.go:82] duration metric: took 4m0.000999177s for pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace to be "Ready" ...
	E0120 12:32:42.665923  663170 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 12:32:42.665934  663170 pod_ready.go:39] duration metric: took 5m25.307823459s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:32:42.665953  663170 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:32:42.665985  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:42.666060  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:42.761425  663170 cri.go:89] found id: "5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
	I0120 12:32:42.761457  663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
	I0120 12:32:42.761464  663170 cri.go:89] found id: ""
	I0120 12:32:42.761472  663170 logs.go:282] 2 containers: [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15]
	I0120 12:32:42.761530  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.766334  663170 ssh_runner.go:195] Run: which crictl
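	The cri.go/ssh_runner pairs above and below all repeat one pattern per component: run crictl over SSH to enumerate container IDs in any state, then resolve the crictl path for the later log commands. A rough sketch of the enumeration step, assuming crictl on PATH and root access (an illustration, not minikube's actual cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs the same command logged above
	// (crictl ps -a --quiet --name=<component>) and returns one
	// container ID per output line, covering all container states.
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listContainerIDs("kube-apiserver")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}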
	I0120 12:32:42.770402  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 12:32:42.770477  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:42.840870  663170 cri.go:89] found id: "d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
	I0120 12:32:42.840890  663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
	I0120 12:32:42.840895  663170 cri.go:89] found id: ""
	I0120 12:32:42.840902  663170 logs.go:282] 2 containers: [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf]
	I0120 12:32:42.840959  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.846031  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.850194  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 12:32:42.850260  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:42.904928  663170 cri.go:89] found id: "b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
	I0120 12:32:42.904957  663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
	I0120 12:32:42.904963  663170 cri.go:89] found id: ""
	I0120 12:32:42.904970  663170 logs.go:282] 2 containers: [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075]
	I0120 12:32:42.905025  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.909172  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.912704  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:42.912772  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:42.968944  663170 cri.go:89] found id: "d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
	I0120 12:32:42.969015  663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
	I0120 12:32:42.969035  663170 cri.go:89] found id: ""
	I0120 12:32:42.969061  663170 logs.go:282] 2 containers: [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0]
	I0120 12:32:42.969168  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.973579  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.978112  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:42.978252  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:43.050120  663170 cri.go:89] found id: "3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
	I0120 12:32:43.050196  663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
	I0120 12:32:43.050216  663170 cri.go:89] found id: ""
	I0120 12:32:43.050241  663170 logs.go:282] 2 containers: [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03]
	I0120 12:32:43.050338  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.054664  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.058589  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:43.058720  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:43.117777  663170 cri.go:89] found id: "beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
	I0120 12:32:43.117802  663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
	I0120 12:32:43.117807  663170 cri.go:89] found id: ""
	I0120 12:32:43.117814  663170 logs.go:282] 2 containers: [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2]
	I0120 12:32:43.117901  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.126390  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.136897  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:43.137072  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:43.200437  663170 cri.go:89] found id: "a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
	I0120 12:32:43.200515  663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
	I0120 12:32:43.200538  663170 cri.go:89] found id: ""
	I0120 12:32:43.200565  663170 logs.go:282] 2 containers: [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10]
	I0120 12:32:43.200662  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.204950  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.208929  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:43.209037  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:43.259134  663170 cri.go:89] found id: "d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
	I0120 12:32:43.259192  663170 cri.go:89] found id: ""
	I0120 12:32:43.259224  663170 logs.go:282] 1 container: [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6]
	I0120 12:32:43.259308  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.263374  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 12:32:43.263497  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 12:32:43.311336  663170 cri.go:89] found id: "2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
	I0120 12:32:43.311398  663170 cri.go:89] found id: "fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
	I0120 12:32:43.311427  663170 cri.go:89] found id: ""
	I0120 12:32:43.311452  663170 logs.go:282] 2 containers: [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224]
	I0120 12:32:43.311549  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.315630  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.319342  663170 logs.go:123] Gathering logs for kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] ...
	I0120 12:32:43.319422  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
	I0120 12:32:43.372921  663170 logs.go:123] Gathering logs for storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] ...
	I0120 12:32:43.373003  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
	I0120 12:32:43.427917  663170 logs.go:123] Gathering logs for containerd ...
	I0120 12:32:43.427995  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 12:32:43.498070  663170 logs.go:123] Gathering logs for container status ...
	I0120 12:32:43.498147  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
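	The kubelet gather that follows pipes journalctl -u kubelet through a scanner that flags known problem patterns; each "Found kubelet problem" line below is one hit. In this run the recurring hits are metrics-server ErrImagePull/ImagePullBackOff against the unresolvable fake.domain registry, CrashLoopBackOff on dashboard-metrics-scraper and storage-provisioner, and transient reflector RBAC errors right after the node restart. A rough sketch of such a scanner, with the match patterns as assumptions (not minikube's actual logs.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same source command as the log line below: the last 400 kubelet journal entries.
		out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
		if err != nil {
			panic(err)
		}
		// Patterns chosen to match the problems seen in this run (assumed, illustrative).
		patterns := []string{
			"Error syncing pod",    // pod_workers.go failures: ErrImagePull, CrashLoopBackOff
			"Failed to watch *v1.", // reflector.go list/watch errors after restart
		}
		for _, line := range strings.Split(string(out), "\n") {
			for _, p := range patterns {
				if strings.Contains(line, p) {
					fmt.Println("Found kubelet problem:", line)
					break
				}
			}
		}
	}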
	I0120 12:32:43.571418  663170 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:43.571498  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 12:32:43.636420  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.205553     655 reflector.go:138] object-"kube-system"/"coredns-token-brbgd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-brbgd" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.636782  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.218010     655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.637034  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.305944     655 reflector.go:138] object-"kube-system"/"metrics-server-token-t7n5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t7n5d" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.637271  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306033     655 reflector.go:138] object-"kube-system"/"kindnet-token-htldq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-htldq" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.637517  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306082     655 reflector.go:138] object-"kube-system"/"kube-proxy-token-85wbm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85wbm" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.637864  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306131     655 reflector.go:138] object-"default"/"default-token-pngw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pngw5" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.638104  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306180     655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.638355  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306224     655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fgdsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fgdsf" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.646625  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.721324     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.646855  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.754709     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.650337  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:35 old-k8s-version-618033 kubelet[655]: E0120 12:27:35.610171     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.652382  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.614294     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.652979  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.976430     655 pod_workers.go:191] Error syncing pod 7614f8ae-aae6-4203-96ff-40a900278cf6 ("storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"
	W0120 12:32:43.653464  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:50 old-k8s-version-618033 kubelet[655]: E0120 12:27:50.989213     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.653826  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:51 old-k8s-version-618033 kubelet[655]: E0120 12:27:51.992870     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.654514  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:58 old-k8s-version-618033 kubelet[655]: E0120 12:27:58.673483     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.656989  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:00 old-k8s-version-618033 kubelet[655]: E0120 12:28:00.612213     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.657747  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:12 old-k8s-version-618033 kubelet[655]: E0120 12:28:12.084652     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.657954  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:13 old-k8s-version-618033 kubelet[655]: E0120 12:28:13.597787     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.658300  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:18 old-k8s-version-618033 kubelet[655]: E0120 12:28:18.673267     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.658511  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:26 old-k8s-version-618033 kubelet[655]: E0120 12:28:26.596529     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.658872  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:30 old-k8s-version-618033 kubelet[655]: E0120 12:28:30.596223     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.659081  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:40 old-k8s-version-618033 kubelet[655]: E0120 12:28:40.596632     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.659690  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:43 old-k8s-version-618033 kubelet[655]: E0120 12:28:43.165668     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.660040  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:48 old-k8s-version-618033 kubelet[655]: E0120 12:28:48.673706     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.662593  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:51 old-k8s-version-618033 kubelet[655]: E0120 12:28:51.610383     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.663687  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:03 old-k8s-version-618033 kubelet[655]: E0120 12:29:03.602577     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.664062  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:04 old-k8s-version-618033 kubelet[655]: E0120 12:29:04.596213     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.664277  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:17 old-k8s-version-618033 kubelet[655]: E0120 12:29:17.597227     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.664622  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:18 old-k8s-version-618033 kubelet[655]: E0120 12:29:18.596696     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.664827  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:30 old-k8s-version-618033 kubelet[655]: E0120 12:29:30.596660     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.665441  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:33 old-k8s-version-618033 kubelet[655]: E0120 12:29:33.299251     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.665804  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:38 old-k8s-version-618033 kubelet[655]: E0120 12:29:38.673765     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.666010  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:42 old-k8s-version-618033 kubelet[655]: E0120 12:29:42.596621     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.666361  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:49 old-k8s-version-618033 kubelet[655]: E0120 12:29:49.596280     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.666576  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:57 old-k8s-version-618033 kubelet[655]: E0120 12:29:57.598023     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.666934  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:03 old-k8s-version-618033 kubelet[655]: E0120 12:30:03.596329     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.667150  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:10 old-k8s-version-618033 kubelet[655]: E0120 12:30:10.596520     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.667504  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:14 old-k8s-version-618033 kubelet[655]: E0120 12:30:14.596119     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.670002  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:22 old-k8s-version-618033 kubelet[655]: E0120 12:30:22.605228     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.670427  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:28 old-k8s-version-618033 kubelet[655]: E0120 12:30:28.596161     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.670640  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:36 old-k8s-version-618033 kubelet[655]: E0120 12:30:36.596791     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.670990  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:41 old-k8s-version-618033 kubelet[655]: E0120 12:30:41.596271     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.671204  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:49 old-k8s-version-618033 kubelet[655]: E0120 12:30:49.600904     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.671841  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:54 old-k8s-version-618033 kubelet[655]: E0120 12:30:54.524938     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.672233  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:58 old-k8s-version-618033 kubelet[655]: E0120 12:30:58.673184     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.672442  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:00 old-k8s-version-618033 kubelet[655]: E0120 12:31:00.596643     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.672687  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:12 old-k8s-version-618033 kubelet[655]: E0120 12:31:12.596594     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.673044  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:13 old-k8s-version-618033 kubelet[655]: E0120 12:31:13.596456     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.673252  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:24 old-k8s-version-618033 kubelet[655]: E0120 12:31:24.596574     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.673646  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:25 old-k8s-version-618033 kubelet[655]: E0120 12:31:25.596263     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.673853  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.674203  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.674413  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.674775  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.674982  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.675330  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.675536  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.675891  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.676100  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.676456  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.676672  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
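Every warning in the block above reduces to the same two failure loops: metrics-server cannot pull the unresolvable image fake.domain/registry.k8s.io/echoserver:1.4 (the DNS lookup against 192.168.85.1:53 fails, per the ErrImagePull entries), and dashboard-metrics-scraper is stuck in CrashLoopBackOff. A minimal sketch of re-running the same kubelet scan by hand, assuming the old-k8s-version-618033 profile is still running and reachable via minikube ssh:

	# scan the node's kubelet journal for the same pod-sync errors that the
	# logs.go:138 lines above flag (they read "journalctl -u kubelet -n 400")
	minikube -p old-k8s-version-618033 ssh -- \
	  "sudo journalctl -u kubelet -n 400 | grep -E 'ImagePullBackOff|CrashLoopBackOff|ErrImagePull'"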
	I0120 12:32:43.676711  663170 logs.go:123] Gathering logs for kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] ...
	I0120 12:32:43.676747  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
	I0120 12:32:43.743941  663170 logs.go:123] Gathering logs for kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] ...
	I0120 12:32:43.743981  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
	I0120 12:32:43.806114  663170 logs.go:123] Gathering logs for kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] ...
	I0120 12:32:43.806150  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
	I0120 12:32:43.857911  663170 logs.go:123] Gathering logs for kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] ...
	I0120 12:32:43.857941  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
	I0120 12:32:43.917003  663170 logs.go:123] Gathering logs for kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] ...
	I0120 12:32:43.917032  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
	I0120 12:32:43.992709  663170 logs.go:123] Gathering logs for etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] ...
	I0120 12:32:43.992759  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
	I0120 12:32:44.068689  663170 logs.go:123] Gathering logs for coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] ...
	I0120 12:32:44.068723  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
	I0120 12:32:44.123499  663170 logs.go:123] Gathering logs for kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] ...
	I0120 12:32:44.123529  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
	I0120 12:32:44.181810  663170 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:44.181838  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:44.204612  663170 logs.go:123] Gathering logs for etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] ...
	I0120 12:32:44.204641  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
	I0120 12:32:44.262671  663170 logs.go:123] Gathering logs for kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] ...
	I0120 12:32:44.262704  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
	I0120 12:32:44.313537  663170 logs.go:123] Gathering logs for storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] ...
	I0120 12:32:44.313569  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
	I0120 12:32:44.385646  663170 logs.go:123] Gathering logs for kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] ...
	I0120 12:32:44.385744  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
	I0120 12:32:44.474032  663170 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:44.474111  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 12:32:44.677528  663170 logs.go:123] Gathering logs for coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] ...
	I0120 12:32:44.677562  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
	I0120 12:32:44.721616  663170 logs.go:123] Gathering logs for kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] ...
	I0120 12:32:44.721690  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
	I0120 12:32:44.768059  663170 logs.go:123] Gathering logs for kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] ...
	I0120 12:32:44.768141  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
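Each "Gathering logs for ..." step above is the same remote call with a different container ID: minikube shells into the node and tails the container through crictl. A sketch of one such invocation, reusing the kube-apiserver ID already listed above (any other ID from this section works the same way):

	# tail the last 400 lines of one CRI container, as the ssh_runner.go:195
	# steps above do for every control-plane component in turn
	CONTAINER_ID=6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15
	minikube -p old-k8s-version-618033 ssh -- \
	  "sudo /usr/bin/crictl logs --tail 400 ${CONTAINER_ID}"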
	I0120 12:32:44.829786  663170 out.go:358] Setting ErrFile to fd 2...
	I0120 12:32:44.829821  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 12:32:44.829881  663170 out.go:270] X Problems detected in kubelet:
	W0120 12:32:44.829892  663170 out.go:270]   Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:44.829917  663170 out.go:270]   Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:44.829953  663170 out.go:270]   Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:44.829961  663170 out.go:270]   Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:44.829967  663170 out.go:270]   Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0120 12:32:44.829972  663170 out.go:358] Setting ErrFile to fd 2...
	I0120 12:32:44.829979  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:32:54.831056  663170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:54.842999  663170 api_server.go:72] duration metric: took 5m55.169056051s to wait for apiserver process to appear ...
	I0120 12:32:54.843025  663170 api_server.go:88] waiting for apiserver healthz status ...
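The healthz wait above first confirms an apiserver process exists in the node, then polls the health endpoint. A hedged equivalent from the host, assuming minikube's default behavior of naming the kubeconfig context after the profile:

	# confirm the apiserver process inside the node, mirroring the pgrep step
	minikube -p old-k8s-version-618033 ssh -- \
	  "sudo pgrep -xnf kube-apiserver.*minikube.*"
	# query the same healthz status through the cluster credentials
	kubectl --context old-k8s-version-618033 get --raw /healthz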
	I0120 12:32:54.843060  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:54.843120  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:54.892331  663170 cri.go:89] found id: "5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
	I0120 12:32:54.892355  663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
	I0120 12:32:54.892360  663170 cri.go:89] found id: ""
	I0120 12:32:54.892367  663170 logs.go:282] 2 containers: [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15]
	I0120 12:32:54.892424  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:54.896167  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:54.899483  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 12:32:54.899551  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:54.947556  663170 cri.go:89] found id: "d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
	I0120 12:32:54.947585  663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
	I0120 12:32:54.947591  663170 cri.go:89] found id: ""
	I0120 12:32:54.947598  663170 logs.go:282] 2 containers: [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf]
	I0120 12:32:54.947656  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:54.951481  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:54.955038  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 12:32:54.955113  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:54.999061  663170 cri.go:89] found id: "b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
	I0120 12:32:54.999094  663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
	I0120 12:32:54.999099  663170 cri.go:89] found id: ""
	I0120 12:32:54.999106  663170 logs.go:282] 2 containers: [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075]
	I0120 12:32:54.999164  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.003398  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.006791  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:55.006865  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:55.053724  663170 cri.go:89] found id: "d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
	I0120 12:32:55.053750  663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
	I0120 12:32:55.053755  663170 cri.go:89] found id: ""
	I0120 12:32:55.053763  663170 logs.go:282] 2 containers: [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0]
	I0120 12:32:55.053826  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.057957  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.061739  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:55.061865  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:55.112602  663170 cri.go:89] found id: "3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
	I0120 12:32:55.112625  663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
	I0120 12:32:55.112631  663170 cri.go:89] found id: ""
	I0120 12:32:55.112638  663170 logs.go:282] 2 containers: [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03]
	I0120 12:32:55.112718  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.116611  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.121704  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:55.121779  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:55.181387  663170 cri.go:89] found id: "beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
	I0120 12:32:55.181409  663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
	I0120 12:32:55.181414  663170 cri.go:89] found id: ""
	I0120 12:32:55.181421  663170 logs.go:282] 2 containers: [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2]
	I0120 12:32:55.181497  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.186863  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.191042  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:55.191113  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:55.244409  663170 cri.go:89] found id: "a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
	I0120 12:32:55.244442  663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
	I0120 12:32:55.244449  663170 cri.go:89] found id: ""
	I0120 12:32:55.244456  663170 logs.go:282] 2 containers: [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10]
	I0120 12:32:55.244522  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.253198  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.260336  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 12:32:55.260427  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 12:32:55.307825  663170 cri.go:89] found id: "2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
	I0120 12:32:55.307847  663170 cri.go:89] found id: "fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
	I0120 12:32:55.307851  663170 cri.go:89] found id: ""
	I0120 12:32:55.307858  663170 logs.go:282] 2 containers: [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224]
	I0120 12:32:55.307925  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.311753  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.315323  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:55.315404  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:55.356240  663170 cri.go:89] found id: "d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
	I0120 12:32:55.356269  663170 cri.go:89] found id: ""
	I0120 12:32:55.356277  663170 logs.go:282] 1 containers: [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6]
	I0120 12:32:55.356345  663170 ssh_runner.go:195] Run: which crictl
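For each component, the discovery loop above pairs a `which crictl` lookup with `crictl ps -a --quiet --name=<component>`, so it collects both the current and any exited container IDs (which is why most components report two containers after the restart). A sketch of the same enumeration for one component:

	# list every kube-apiserver container ID, running or exited, exactly as
	# the cri.go:54 listing above does for each component in turn
	minikube -p old-k8s-version-618033 ssh -- \
	  "sudo crictl ps -a --quiet --name=kube-apiserver"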
	I0120 12:32:55.359958  663170 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:55.359984  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 12:32:55.418304  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.205553     655 reflector.go:138] object-"kube-system"/"coredns-token-brbgd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-brbgd" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.418614  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.218010     655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.418849  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.305944     655 reflector.go:138] object-"kube-system"/"metrics-server-token-t7n5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t7n5d" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419071  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306033     655 reflector.go:138] object-"kube-system"/"kindnet-token-htldq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-htldq" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419291  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306082     655 reflector.go:138] object-"kube-system"/"kube-proxy-token-85wbm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85wbm" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419546  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306131     655 reflector.go:138] object-"default"/"default-token-pngw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pngw5" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419756  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306180     655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419984  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306224     655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fgdsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fgdsf" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.428109  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.721324     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.428309  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.754709     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.431722  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:35 old-k8s-version-618033 kubelet[655]: E0120 12:27:35.610171     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.433745  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.614294     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.434318  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.976430     655 pod_workers.go:191] Error syncing pod 7614f8ae-aae6-4203-96ff-40a900278cf6 ("storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"
	W0120 12:32:55.434787  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:50 old-k8s-version-618033 kubelet[655]: E0120 12:27:50.989213     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.435118  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:51 old-k8s-version-618033 kubelet[655]: E0120 12:27:51.992870     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.435792  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:58 old-k8s-version-618033 kubelet[655]: E0120 12:27:58.673483     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.438245  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:00 old-k8s-version-618033 kubelet[655]: E0120 12:28:00.612213     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.438973  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:12 old-k8s-version-618033 kubelet[655]: E0120 12:28:12.084652     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.439157  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:13 old-k8s-version-618033 kubelet[655]: E0120 12:28:13.597787     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.439485  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:18 old-k8s-version-618033 kubelet[655]: E0120 12:28:18.673267     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.439669  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:26 old-k8s-version-618033 kubelet[655]: E0120 12:28:26.596529     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.439998  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:30 old-k8s-version-618033 kubelet[655]: E0120 12:28:30.596223     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.440181  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:40 old-k8s-version-618033 kubelet[655]: E0120 12:28:40.596632     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.440772  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:43 old-k8s-version-618033 kubelet[655]: E0120 12:28:43.165668     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.441099  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:48 old-k8s-version-618033 kubelet[655]: E0120 12:28:48.673706     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.443603  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:51 old-k8s-version-618033 kubelet[655]: E0120 12:28:51.610383     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.443790  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:03 old-k8s-version-618033 kubelet[655]: E0120 12:29:03.602577     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.444120  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:04 old-k8s-version-618033 kubelet[655]: E0120 12:29:04.596213     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.444327  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:17 old-k8s-version-618033 kubelet[655]: E0120 12:29:17.597227     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.444659  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:18 old-k8s-version-618033 kubelet[655]: E0120 12:29:18.596696     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.444844  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:30 old-k8s-version-618033 kubelet[655]: E0120 12:29:30.596660     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.445435  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:33 old-k8s-version-618033 kubelet[655]: E0120 12:29:33.299251     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.445773  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:38 old-k8s-version-618033 kubelet[655]: E0120 12:29:38.673765     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.445961  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:42 old-k8s-version-618033 kubelet[655]: E0120 12:29:42.596621     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.446294  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:49 old-k8s-version-618033 kubelet[655]: E0120 12:29:49.596280     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.446482  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:57 old-k8s-version-618033 kubelet[655]: E0120 12:29:57.598023     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.446813  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:03 old-k8s-version-618033 kubelet[655]: E0120 12:30:03.596329     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.446998  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:10 old-k8s-version-618033 kubelet[655]: E0120 12:30:10.596520     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.447326  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:14 old-k8s-version-618033 kubelet[655]: E0120 12:30:14.596119     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.449780  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:22 old-k8s-version-618033 kubelet[655]: E0120 12:30:22.605228     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.450110  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:28 old-k8s-version-618033 kubelet[655]: E0120 12:30:28.596161     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.450297  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:36 old-k8s-version-618033 kubelet[655]: E0120 12:30:36.596791     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.450632  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:41 old-k8s-version-618033 kubelet[655]: E0120 12:30:41.596271     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.450817  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:49 old-k8s-version-618033 kubelet[655]: E0120 12:30:49.600904     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.451412  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:54 old-k8s-version-618033 kubelet[655]: E0120 12:30:54.524938     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.451745  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:58 old-k8s-version-618033 kubelet[655]: E0120 12:30:58.673184     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.451930  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:00 old-k8s-version-618033 kubelet[655]: E0120 12:31:00.596643     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.452114  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:12 old-k8s-version-618033 kubelet[655]: E0120 12:31:12.596594     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.452442  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:13 old-k8s-version-618033 kubelet[655]: E0120 12:31:13.596456     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.452627  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:24 old-k8s-version-618033 kubelet[655]: E0120 12:31:24.596574     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.452954  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:25 old-k8s-version-618033 kubelet[655]: E0120 12:31:25.596263     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.453138  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.453469  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.453659  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.453990  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.454174  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.454503  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.454690  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.455019  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.455203  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.455532  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.455716  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.456045  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.456231  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0120 12:32:55.456240  663170 logs.go:123] Gathering logs for coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] ...
	I0120 12:32:55.456257  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
	I0120 12:32:55.498655  663170 logs.go:123] Gathering logs for kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] ...
	I0120 12:32:55.498685  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
	I0120 12:32:55.545339  663170 logs.go:123] Gathering logs for kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] ...
	I0120 12:32:55.545367  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
	I0120 12:32:55.695497  663170 logs.go:123] Gathering logs for kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] ...
	I0120 12:32:55.695578  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
	I0120 12:32:55.790895  663170 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:55.790932  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:55.808465  663170 logs.go:123] Gathering logs for kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] ...
	I0120 12:32:55.808496  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
	I0120 12:32:55.866823  663170 logs.go:123] Gathering logs for kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] ...
	I0120 12:32:55.866858  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
	I0120 12:32:55.996274  663170 logs.go:123] Gathering logs for kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] ...
	I0120 12:32:55.996312  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
	I0120 12:32:56.059035  663170 logs.go:123] Gathering logs for storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] ...
	I0120 12:32:56.059067  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
	I0120 12:32:56.108806  663170 logs.go:123] Gathering logs for etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] ...
	I0120 12:32:56.108854  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
	I0120 12:32:56.180797  663170 logs.go:123] Gathering logs for kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] ...
	I0120 12:32:56.180898  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
	I0120 12:32:56.249831  663170 logs.go:123] Gathering logs for storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] ...
	I0120 12:32:56.249864  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
	I0120 12:32:56.297821  663170 logs.go:123] Gathering logs for kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] ...
	I0120 12:32:56.297851  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
	I0120 12:32:56.353347  663170 logs.go:123] Gathering logs for container status ...
	I0120 12:32:56.353381  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:56.414819  663170 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:56.414848  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 12:32:56.561358  663170 logs.go:123] Gathering logs for etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] ...
	I0120 12:32:56.561390  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
	I0120 12:32:56.626001  663170 logs.go:123] Gathering logs for coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] ...
	I0120 12:32:56.626092  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
	I0120 12:32:56.674576  663170 logs.go:123] Gathering logs for kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] ...
	I0120 12:32:56.674668  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
	I0120 12:32:56.731078  663170 logs.go:123] Gathering logs for kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] ...
	I0120 12:32:56.731162  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
	I0120 12:32:56.784777  663170 logs.go:123] Gathering logs for kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] ...
	I0120 12:32:56.784856  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
	I0120 12:32:56.839707  663170 logs.go:123] Gathering logs for containerd ...
	I0120 12:32:56.839793  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 12:32:56.911951  663170 out.go:358] Setting ErrFile to fd 2...
	I0120 12:32:56.911990  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 12:32:56.912046  663170 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0120 12:32:56.912063  663170 out.go:270]   Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:56.912071  663170 out.go:270]   Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	  Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:56.912084  663170 out.go:270]   Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:56.912099  663170 out.go:270]   Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	  Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:56.912124  663170 out.go:270]   Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0120 12:32:56.912129  663170 out.go:358] Setting ErrFile to fd 2...
	I0120 12:32:56.912136  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:33:06.913477  663170 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0120 12:33:06.924185  663170 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0120 12:33:06.927401  663170 out.go:201] 
	W0120 12:33:06.930237  663170 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0120 12:33:06.930282  663170 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0120 12:33:06.930305  663170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0120 12:33:06.930314  663170 out.go:270] * 
	* 
	W0120 12:33:06.931223  663170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 12:33:06.933295  663170 out.go:201] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-618033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-618033
helpers_test.go:235: (dbg) docker inspect old-k8s-version-618033:

-- stdout --
	[
	    {
	        "Id": "ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889",
	        "Created": "2025-01-20T12:23:38.412020885Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 663372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-20T12:26:52.864273494Z",
	            "FinishedAt": "2025-01-20T12:26:51.948457928Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889/hosts",
	        "LogPath": "/var/lib/docker/containers/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889/ec70bc7fcb97ca4ce8b74d85b09320cbb7c0651c0ae58874d562fbdaf31c1889-json.log",
	        "Name": "/old-k8s-version-618033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-618033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-618033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc4daebc41f34d91bb0542c240830738e250c018c88a571a986a3d2ba28de143-init/diff:/var/lib/docker/overlay2/edf43674e048a8839ae0b875f0e8c5a4a292c844ffe81a34a599fd5845eee425/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc4daebc41f34d91bb0542c240830738e250c018c88a571a986a3d2ba28de143/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc4daebc41f34d91bb0542c240830738e250c018c88a571a986a3d2ba28de143/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc4daebc41f34d91bb0542c240830738e250c018c88a571a986a3d2ba28de143/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-618033",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-618033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-618033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-618033",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-618033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0d1406f09bb4fdce3719564352b76862ef42db982dc8c5453eb7eba1af7cecbf",
	            "SandboxKey": "/var/run/docker/netns/0d1406f09bb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-618033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "75fb228165d452b8040c2a15a4b962bf51901d1bfcb0a4891b16500820d18139",
	                    "EndpointID": "443240ebb9206c877dc78bd4fbcd8b0502dc30d9649e0f07ab64cc2f8b6dccb0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-618033",
	                        "ec70bc7fcb97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-618033 -n old-k8s-version-618033
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-618033 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-618033 logs -n 25: (2.148475644s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-152963                              | cert-expiration-152963       | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:22 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | force-systemd-env-236901                               | force-systemd-env-236901     | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:22 UTC |
	|         | ssh cat                                                |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-236901                            | force-systemd-env-236901     | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:22 UTC |
	| start   | -p cert-options-753716                                 | cert-options-753716          | jenkins | v1.35.0 | 20 Jan 25 12:22 UTC | 20 Jan 25 12:23 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | cert-options-753716 ssh                                | cert-options-753716          | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:23 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-753716 -- sudo                         | cert-options-753716          | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:23 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-753716                                 | cert-options-753716          | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:23 UTC |
	| start   | -p old-k8s-version-618033                              | old-k8s-version-618033       | jenkins | v1.35.0 | 20 Jan 25 12:23 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-152963                              | cert-expiration-152963       | jenkins | v1.35.0 | 20 Jan 25 12:25 UTC | 20 Jan 25 12:26 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-152963                              | cert-expiration-152963       | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	| start   | -p                                                     | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:27 UTC |
	|         | default-k8s-diff-port-800877                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-618033        | old-k8s-version-618033       | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-618033                              | old-k8s-version-618033       | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-618033             | old-k8s-version-618033       | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC | 20 Jan 25 12:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-618033                              | old-k8s-version-618033       | jenkins | v1.35.0 | 20 Jan 25 12:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-800877  | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | default-k8s-diff-port-800877                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-800877       | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:27 UTC | 20 Jan 25 12:31 UTC |
	|         | default-k8s-diff-port-800877                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-800877                           | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
	|         | default-k8s-diff-port-800877                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
	|         | default-k8s-diff-port-800877                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
	|         | default-k8s-diff-port-800877                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-800877 | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC | 20 Jan 25 12:32 UTC |
	|         | default-k8s-diff-port-800877                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-180778                                  | embed-certs-180778           | jenkins | v1.35.0 | 20 Jan 25 12:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:32:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:32:18.230102  672840 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:32:18.230478  672840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:32:18.230517  672840 out.go:358] Setting ErrFile to fd 2...
	I0120 12:32:18.230549  672840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:32:18.230822  672840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	I0120 12:32:18.231325  672840 out.go:352] Setting JSON to false
	I0120 12:32:18.232359  672840 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8084,"bootTime":1737368255,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 12:32:18.232510  672840 start.go:139] virtualization:  
	I0120 12:32:18.238921  672840 out.go:177] * [embed-certs-180778] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 12:32:18.242366  672840 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 12:32:18.242443  672840 notify.go:220] Checking for updates...
	I0120 12:32:18.248801  672840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:32:18.252051  672840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	I0120 12:32:18.255101  672840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	I0120 12:32:18.258199  672840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 12:32:18.261213  672840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:32:18.264742  672840 config.go:182] Loaded profile config "old-k8s-version-618033": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0120 12:32:18.264881  672840 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:32:18.291471  672840 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 12:32:18.291597  672840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 12:32:18.348736  672840 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 12:32:18.339174535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 12:32:18.348848  672840 docker.go:318] overlay module found
	I0120 12:32:18.352000  672840 out.go:177] * Using the docker driver based on user configuration
	I0120 12:32:18.354995  672840 start.go:297] selected driver: docker
	I0120 12:32:18.355024  672840 start.go:901] validating driver "docker" against <nil>
	I0120 12:32:18.355039  672840 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:32:18.355884  672840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 12:32:18.431057  672840 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-20 12:32:18.42084733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 12:32:18.431276  672840 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:32:18.431522  672840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:32:18.434534  672840 out.go:177] * Using Docker driver with root privileges
	I0120 12:32:18.437570  672840 cni.go:84] Creating CNI manager for ""
	I0120 12:32:18.437724  672840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 12:32:18.437735  672840 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 12:32:18.437827  672840 start.go:340] cluster config:
	{Name:embed-certs-180778 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:32:18.443137  672840 out.go:177] * Starting "embed-certs-180778" primary control-plane node in "embed-certs-180778" cluster
	I0120 12:32:18.446116  672840 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 12:32:18.449059  672840 out.go:177] * Pulling base image v0.0.46 ...
	I0120 12:32:18.451934  672840 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:32:18.451999  672840 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
	I0120 12:32:18.452012  672840 cache.go:56] Caching tarball of preloaded images
	I0120 12:32:18.452025  672840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 12:32:18.452144  672840 preload.go:172] Found /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0120 12:32:18.452157  672840 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
	I0120 12:32:18.452280  672840 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/config.json ...
	I0120 12:32:18.452314  672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/config.json: {Name:mk4b172d32fdfc0b2fc3a01d2d2117ddf63ff5ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:18.472642  672840 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0120 12:32:18.472668  672840 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0120 12:32:18.472682  672840 cache.go:227] Successfully downloaded all kic artifacts
	I0120 12:32:18.472715  672840 start.go:360] acquireMachinesLock for embed-certs-180778: {Name:mk5e06d24869773ea5a6026455c6dbb830cd62b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:32:18.472824  672840 start.go:364] duration metric: took 87.402µs to acquireMachinesLock for "embed-certs-180778"
	I0120 12:32:18.472857  672840 start.go:93] Provisioning new machine with config: &{Name:embed-certs-180778 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 12:32:18.472934  672840 start.go:125] createHost starting for "" (driver="docker")
	I0120 12:32:17.673799  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:20.173022  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:22.174529  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:18.476306  672840 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0120 12:32:18.476570  672840 start.go:159] libmachine.API.Create for "embed-certs-180778" (driver="docker")
	I0120 12:32:18.476609  672840 client.go:168] LocalClient.Create starting
	I0120 12:32:18.476712  672840 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem
	I0120 12:32:18.476757  672840 main.go:141] libmachine: Decoding PEM data...
	I0120 12:32:18.476779  672840 main.go:141] libmachine: Parsing certificate...
	I0120 12:32:18.476836  672840 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem
	I0120 12:32:18.476858  672840 main.go:141] libmachine: Decoding PEM data...
	I0120 12:32:18.476869  672840 main.go:141] libmachine: Parsing certificate...
	I0120 12:32:18.477246  672840 cli_runner.go:164] Run: docker network inspect embed-certs-180778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0120 12:32:18.500724  672840 cli_runner.go:211] docker network inspect embed-certs-180778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0120 12:32:18.500809  672840 network_create.go:284] running [docker network inspect embed-certs-180778] to gather additional debugging logs...
	I0120 12:32:18.500836  672840 cli_runner.go:164] Run: docker network inspect embed-certs-180778
	W0120 12:32:18.518233  672840 cli_runner.go:211] docker network inspect embed-certs-180778 returned with exit code 1
	I0120 12:32:18.518266  672840 network_create.go:287] error running [docker network inspect embed-certs-180778]: docker network inspect embed-certs-180778: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-180778 not found
	I0120 12:32:18.518281  672840 network_create.go:289] output of [docker network inspect embed-certs-180778]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-180778 not found
	
	** /stderr **
	I0120 12:32:18.518383  672840 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0120 12:32:18.537730  672840 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ab00e182d66a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a6:06:fc:f6} reservation:<nil>}
	I0120 12:32:18.538122  672840 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f204b1132b59 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f7:a7:3b:33} reservation:<nil>}
	I0120 12:32:18.538486  672840 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1b8277a01988 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:87:7b:1a:fe} reservation:<nil>}
	I0120 12:32:18.539078  672840 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198fd00}
	I0120 12:32:18.539105  672840 network_create.go:124] attempt to create docker network embed-certs-180778 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0120 12:32:18.539176  672840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-180778 embed-certs-180778
	I0120 12:32:18.635541  672840 network_create.go:108] docker network embed-certs-180778 192.168.76.0/24 created
	I0120 12:32:18.635575  672840 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-180778" container
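	The subnet probe above walks the private /24 ranges already claimed by other bridges (192.168.49.0, .58, .67) and settles on 192.168.76.0/24, then creates a labeled bridge network and derives the node's static IP (gateway + 1). A minimal sketch of the equivalent manual step, reusing the subnet, gateway, MTU, and labels quoted in the log; running it standalone, outside minikube, is an assumption for illustration:
	
		# create the bridge network exactly as the cli_runner invocation above does
		docker network create --driver=bridge \
		  --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
		  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
		  --label=created_by.minikube.sigs.k8s.io=true \
		  --label=name.minikube.sigs.k8s.io=embed-certs-180778 \
		  embed-certs-180778
		# confirm the subnet and gateway that the static IP 192.168.76.2 is derived from
		docker network inspect embed-certs-180778 \
		  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'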
	I0120 12:32:18.635655  672840 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0120 12:32:18.652648  672840 cli_runner.go:164] Run: docker volume create embed-certs-180778 --label name.minikube.sigs.k8s.io=embed-certs-180778 --label created_by.minikube.sigs.k8s.io=true
	I0120 12:32:18.677046  672840 oci.go:103] Successfully created a docker volume embed-certs-180778
	I0120 12:32:18.677189  672840 cli_runner.go:164] Run: docker run --rm --name embed-certs-180778-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-180778 --entrypoint /usr/bin/test -v embed-certs-180778:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0120 12:32:19.359315  672840 oci.go:107] Successfully prepared a docker volume embed-certs-180778
	I0120 12:32:19.359369  672840 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:32:19.359391  672840 kic.go:194] Starting extracting preloaded images to volume ...
	I0120 12:32:19.359463  672840 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-180778:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0120 12:32:24.671226  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:26.671707  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:24.277367  672840 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-180778:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.917864778s)
	I0120 12:32:24.277400  672840 kic.go:203] duration metric: took 4.918005792s to extract preloaded images to volume ...
	W0120 12:32:24.277541  672840 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0120 12:32:24.277681  672840 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0120 12:32:24.334120  672840 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-180778 --name embed-certs-180778 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-180778 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-180778 --network embed-certs-180778 --ip 192.168.76.2 --volume embed-certs-180778:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
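	The docker run above starts the node as a privileged container, attaches it to the new network at the static IP, and publishes SSH (22), the Docker API (2376), and the Kubernetes API server (8443) on ephemeral localhost ports. A short sketch of recovering the mapped SSH port (33474 later in this run), using the same inspect template the log itself executes; invoking it by hand is an assumption for illustration:
	
		# print the host port that Docker mapped to the container's 22/tcp
		docker container inspect \
		  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
		  embed-certs-180778   # e.g. 33474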
	I0120 12:32:24.728856  672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Running}}
	I0120 12:32:24.749194  672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
	I0120 12:32:24.779611  672840 cli_runner.go:164] Run: docker exec embed-certs-180778 stat /var/lib/dpkg/alternatives/iptables
	I0120 12:32:24.835782  672840 oci.go:144] the created container "embed-certs-180778" has a running status.
	I0120 12:32:24.835812  672840 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa...
	I0120 12:32:25.283807  672840 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0120 12:32:25.331387  672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
	I0120 12:32:25.354013  672840 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0120 12:32:25.354033  672840 kic_runner.go:114] Args: [docker exec --privileged embed-certs-180778 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0120 12:32:25.419570  672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
	I0120 12:32:25.464748  672840 machine.go:93] provisionDockerMachine start ...
	I0120 12:32:25.464852  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:25.493057  672840 main.go:141] libmachine: Using SSH client type: native
	I0120 12:32:25.493383  672840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I0120 12:32:25.493403  672840 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 12:32:25.494070  672840 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36120->127.0.0.1:33474: read: connection reset by peer
	I0120 12:32:28.620940  672840 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-180778
	
	I0120 12:32:28.620963  672840 ubuntu.go:169] provisioning hostname "embed-certs-180778"
	I0120 12:32:28.621034  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:28.639238  672840 main.go:141] libmachine: Using SSH client type: native
	I0120 12:32:28.639506  672840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I0120 12:32:28.639525  672840 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-180778 && echo "embed-certs-180778" | sudo tee /etc/hostname
	I0120 12:32:28.783653  672840 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-180778
	
	I0120 12:32:28.783791  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:28.803537  672840 main.go:141] libmachine: Using SSH client type: native
	I0120 12:32:28.803785  672840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33474 <nil> <nil>}
	I0120 12:32:28.803802  672840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-180778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-180778/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-180778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:32:28.925771  672840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:32:28.925801  672840 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20151-446459/.minikube CaCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20151-446459/.minikube}
	I0120 12:32:28.925822  672840 ubuntu.go:177] setting up certificates
	I0120 12:32:28.925835  672840 provision.go:84] configureAuth start
	I0120 12:32:28.925902  672840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-180778
	I0120 12:32:28.944161  672840 provision.go:143] copyHostCerts
	I0120 12:32:28.944236  672840 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem, removing ...
	I0120 12:32:28.944251  672840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem
	I0120 12:32:28.944329  672840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/ca.pem (1082 bytes)
	I0120 12:32:28.944441  672840 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem, removing ...
	I0120 12:32:28.944453  672840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem
	I0120 12:32:28.944483  672840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/cert.pem (1123 bytes)
	I0120 12:32:28.944551  672840 exec_runner.go:144] found /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem, removing ...
	I0120 12:32:28.944564  672840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem
	I0120 12:32:28.944594  672840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20151-446459/.minikube/key.pem (1675 bytes)
	I0120 12:32:28.944860  672840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem org=jenkins.embed-certs-180778 san=[127.0.0.1 192.168.76.2 embed-certs-180778 localhost minikube]
	I0120 12:32:29.205235  672840 provision.go:177] copyRemoteCerts
	I0120 12:32:29.205308  672840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:32:29.205380  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:29.223941  672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
	I0120 12:32:29.315345  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 12:32:29.344056  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 12:32:29.369582  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 12:32:29.395347  672840 provision.go:87] duration metric: took 469.498258ms to configureAuth
	I0120 12:32:29.395375  672840 ubuntu.go:193] setting minikube options for container-runtime
	I0120 12:32:29.395570  672840 config.go:182] Loaded profile config "embed-certs-180778": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:32:29.395577  672840 machine.go:96] duration metric: took 3.930805644s to provisionDockerMachine
	I0120 12:32:29.395583  672840 client.go:171] duration metric: took 10.918962629s to LocalClient.Create
	I0120 12:32:29.395597  672840 start.go:167] duration metric: took 10.919028614s to libmachine.API.Create "embed-certs-180778"
	I0120 12:32:29.395604  672840 start.go:293] postStartSetup for "embed-certs-180778" (driver="docker")
	I0120 12:32:29.395613  672840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:32:29.395663  672840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:32:29.395708  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:29.412875  672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
	I0120 12:32:29.507688  672840 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:32:29.511275  672840 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0120 12:32:29.511317  672840 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0120 12:32:29.511329  672840 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0120 12:32:29.511337  672840 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0120 12:32:29.511348  672840 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-446459/.minikube/addons for local assets ...
	I0120 12:32:29.511414  672840 filesync.go:126] Scanning /home/jenkins/minikube-integration/20151-446459/.minikube/files for local assets ...
	I0120 12:32:29.511508  672840 filesync.go:149] local asset: /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem -> 4518352.pem in /etc/ssl/certs
	I0120 12:32:29.511623  672840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 12:32:29.521003  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem --> /etc/ssl/certs/4518352.pem (1708 bytes)
	I0120 12:32:29.548519  672840 start.go:296] duration metric: took 152.900885ms for postStartSetup
	I0120 12:32:29.548899  672840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-180778
	I0120 12:32:29.568351  672840 profile.go:143] Saving config to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/config.json ...
	I0120 12:32:29.568653  672840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 12:32:29.568736  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:29.594166  672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
	I0120 12:32:29.687019  672840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0120 12:32:29.691619  672840 start.go:128] duration metric: took 11.218670352s to createHost
	I0120 12:32:29.691643  672840 start.go:83] releasing machines lock for "embed-certs-180778", held for 11.218804737s
	I0120 12:32:29.691715  672840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-180778
	I0120 12:32:29.709489  672840 ssh_runner.go:195] Run: cat /version.json
	I0120 12:32:29.709543  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:29.709853  672840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:32:29.709910  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:29.729911  672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
	I0120 12:32:29.730384  672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
	I0120 12:32:29.970590  672840 ssh_runner.go:195] Run: systemctl --version
	I0120 12:32:29.975076  672840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0120 12:32:29.980259  672840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0120 12:32:30.007527  672840 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0120 12:32:30.007611  672840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:32:30.096526  672840 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0120 12:32:30.096554  672840 start.go:495] detecting cgroup driver to use...
	I0120 12:32:30.096587  672840 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0120 12:32:30.096663  672840 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0120 12:32:30.118294  672840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0120 12:32:30.133878  672840 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:32:30.134004  672840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:32:30.151267  672840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:32:30.176215  672840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:32:30.283238  672840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:32:30.392964  672840 docker.go:233] disabling docker service ...
	I0120 12:32:30.393089  672840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:32:30.416819  672840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:32:30.429232  672840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:32:30.525173  672840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:32:30.631168  672840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:32:30.643963  672840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:32:30.661241  672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0120 12:32:30.678474  672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0120 12:32:30.689781  672840 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0120 12:32:30.689900  672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0120 12:32:30.701249  672840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:32:30.712146  672840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0120 12:32:30.723901  672840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0120 12:32:30.737958  672840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:32:30.748263  672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0120 12:32:30.759698  672840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0120 12:32:30.771547  672840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0120 12:32:30.781827  672840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:32:30.791499  672840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:32:30.800701  672840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:32:30.883129  672840 ssh_runner.go:195] Run: sudo systemctl restart containerd
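	The sed pipeline above rewrites /etc/containerd/config.toml in place before this restart: it pins the sandbox image to pause:3.10, forces the runc v2 runtime, selects the cgroupfs driver (SystemdCgroup = false), points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports. A sketch of verifying the result by hand; the expected values are inferred from the sed expressions, not quoted from this run:
	
		grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
		#   sandbox_image = "registry.k8s.io/pause:3.10"
		#   SystemdCgroup = false
		#   conf_dir = "/etc/cni/net.d"
		sudo systemctl is-active containerd   # "active" once the restart has completed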
	I0120 12:32:31.019385  672840 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0120 12:32:31.019484  672840 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0120 12:32:31.023822  672840 start.go:563] Will wait 60s for crictl version
	I0120 12:32:31.023922  672840 ssh_runner.go:195] Run: which crictl
	I0120 12:32:31.027757  672840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:32:31.065859  672840 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0120 12:32:31.066031  672840 ssh_runner.go:195] Run: containerd --version
	I0120 12:32:31.096215  672840 ssh_runner.go:195] Run: containerd --version
	I0120 12:32:31.125478  672840 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.24 ...
	I0120 12:32:28.676084  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.173025  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:31.128610  672840 cli_runner.go:164] Run: docker network inspect embed-certs-180778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0120 12:32:31.145492  672840 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0120 12:32:31.149941  672840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:32:31.161236  672840 kubeadm.go:883] updating cluster {Name:embed-certs-180778 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:32:31.161363  672840 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
	I0120 12:32:31.161429  672840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:32:31.209673  672840 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 12:32:31.209700  672840 containerd.go:534] Images already preloaded, skipping extraction
	I0120 12:32:31.209767  672840 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:32:31.248848  672840 containerd.go:627] all images are preloaded for containerd runtime.
	I0120 12:32:31.248872  672840 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:32:31.248881  672840 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.0 containerd true true} ...
	I0120 12:32:31.248974  672840 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-180778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:32:31.249044  672840 ssh_runner.go:195] Run: sudo crictl info
	I0120 12:32:31.288426  672840 cni.go:84] Creating CNI manager for ""
	I0120 12:32:31.288452  672840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 12:32:31.288464  672840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:32:31.288488  672840 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-180778 NodeName:embed-certs-180778 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:32:31.288608  672840 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-180778"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 12:32:31.288683  672840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:32:31.298015  672840 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:32:31.298087  672840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:32:31.306816  672840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0120 12:32:31.324959  672840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:32:31.343351  672840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
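	The 2308-byte file scp'd above is the generated kubeadm config shown earlier, staged as kubeadm.yaml.new and copied over kubeadm.yaml further down. A hedged sketch of checking it by hand with the bundled binary; the validate subcommand is an assumption (present in recent kubeadm releases) and does not appear in this log:
	
		sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate \
		  --config /var/tmp/minikube/kubeadm.yaml.new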
	I0120 12:32:31.361358  672840 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0120 12:32:31.364914  672840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:32:31.376249  672840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:32:31.480080  672840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:32:31.496199  672840 certs.go:68] Setting up /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778 for IP: 192.168.76.2
	I0120 12:32:31.496264  672840 certs.go:194] generating shared ca certs ...
	I0120 12:32:31.496296  672840 certs.go:226] acquiring lock for ca certs: {Name:mkcccec907119c13813a959b3b756156d7101c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:31.496481  672840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20151-446459/.minikube/ca.key
	I0120 12:32:31.496532  672840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.key
	I0120 12:32:31.496544  672840 certs.go:256] generating profile certs ...
	I0120 12:32:31.496602  672840 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.key
	I0120 12:32:31.496627  672840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.crt with IP's: []
	I0120 12:32:31.861389  672840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.crt ...
	I0120 12:32:31.861422  672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.crt: {Name:mk66dcfeb372e631d7af648df9273c43dd55d4cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:31.861661  672840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.key ...
	I0120 12:32:31.861677  672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/client.key: {Name:mkd48616a77e5a2dfc13cfc3ddf4fd58bd4a6424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:31.861776  672840 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key.4fcf774e
	I0120 12:32:31.861795  672840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt.4fcf774e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0120 12:32:32.923712  672840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt.4fcf774e ...
	I0120 12:32:32.923820  672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt.4fcf774e: {Name:mk0b55627c76bb7f573a3e475c691c515fb20aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:32.924064  672840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key.4fcf774e ...
	I0120 12:32:32.924103  672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key.4fcf774e: {Name:mk54f28b4d8e13dea001f14acd44dce2ad52e1d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:32.924277  672840 certs.go:381] copying /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt.4fcf774e -> /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt
	I0120 12:32:32.924413  672840 certs.go:385] copying /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key.4fcf774e -> /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key
	I0120 12:32:32.924531  672840 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.key
	I0120 12:32:32.924574  672840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.crt with IP's: []
	I0120 12:32:33.463775  672840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.crt ...
	I0120 12:32:33.463815  672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.crt: {Name:mk877d8de6f8929e80c2eea656c7efdb436d8404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:33.464724  672840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.key ...
	I0120 12:32:33.464744  672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.key: {Name:mk08b2b89218b1d60bc83a4123c18929c147093c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:33.464965  672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835.pem (1338 bytes)
	W0120 12:32:33.465016  672840 certs.go:480] ignoring /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835_empty.pem, impossibly tiny 0 bytes
	I0120 12:32:33.465029  672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 12:32:33.465066  672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/ca.pem (1082 bytes)
	I0120 12:32:33.465100  672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:32:33.465132  672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/certs/key.pem (1675 bytes)
	I0120 12:32:33.465183  672840 certs.go:484] found cert: /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem (1708 bytes)
	I0120 12:32:33.465864  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:32:33.492047  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0120 12:32:33.518007  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:32:33.545473  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 12:32:33.572297  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 12:32:33.601752  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 12:32:33.627864  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:32:33.653053  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/embed-certs-180778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:32:33.680409  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:32:33.706251  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/certs/451835.pem --> /usr/share/ca-certificates/451835.pem (1338 bytes)
	I0120 12:32:33.731192  672840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/ssl/certs/4518352.pem --> /usr/share/ca-certificates/4518352.pem (1708 bytes)
	I0120 12:32:33.756251  672840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:32:33.774353  672840 ssh_runner.go:195] Run: openssl version
	I0120 12:32:33.781401  672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4518352.pem && ln -fs /usr/share/ca-certificates/4518352.pem /etc/ssl/certs/4518352.pem"
	I0120 12:32:33.791457  672840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4518352.pem
	I0120 12:32:33.795020  672840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 11:45 /usr/share/ca-certificates/4518352.pem
	I0120 12:32:33.795081  672840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4518352.pem
	I0120 12:32:33.802142  672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4518352.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 12:32:33.812609  672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:32:33.822137  672840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:32:33.827763  672840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 11:38 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:32:33.827835  672840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:32:33.835901  672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 12:32:33.847755  672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/451835.pem && ln -fs /usr/share/ca-certificates/451835.pem /etc/ssl/certs/451835.pem"
	I0120 12:32:33.858687  672840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/451835.pem
	I0120 12:32:33.862899  672840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 11:45 /usr/share/ca-certificates/451835.pem
	I0120 12:32:33.862968  672840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/451835.pem
	I0120 12:32:33.871401  672840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/451835.pem /etc/ssl/certs/51391683.0"
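	The three ln -fs steps above follow OpenSSL's subject-hash convention: each certificate is linked into /etc/ssl/certs under the hash printed by openssl x509 -hash, which is how TLS clients inside the container find it. Reproducing one link by hand, with the expected values taken from the link names in this log:
	
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
		ls -l /etc/ssl/certs/b5213941.0   # symlink to /etc/ssl/certs/minikubeCA.pem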
	I0120 12:32:33.882229  672840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:32:33.888186  672840 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
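	The non-zero stat above serves as a heuristic: if kubeadm had already generated apiserver-kubelet-client.crt, this would be a restart rather than a first start. A minimal equivalent of that check:
	    if ! sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
	        echo "no kubeadm-generated certs yet; treating this as a first start"
	    fi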
	I0120 12:32:33.888283  672840 kubeadm.go:392] StartCluster: {Name:embed-certs-180778 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-180778 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:32:33.888376  672840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0120 12:32:33.888434  672840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:32:33.929525  672840 cri.go:89] found id: ""
	I0120 12:32:33.929657  672840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:32:33.939061  672840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:32:33.948035  672840 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0120 12:32:33.948156  672840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:32:33.957387  672840 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:32:33.957409  672840 kubeadm.go:157] found existing configuration files:
	
	I0120 12:32:33.957480  672840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:32:33.966497  672840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:32:33.966562  672840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:32:33.975532  672840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:32:33.984647  672840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:32:33.984756  672840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:32:33.993712  672840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:32:34.002924  672840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:32:34.002994  672840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:32:34.015278  672840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:32:34.025440  672840 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:32:34.025545  672840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
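	The four checks above implement a simple stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the check fails (here all four files are simply absent, so the rm calls are no-ops). A compact sketch of that loop:
	    endpoint="https://control-plane.minikube.internal:8443"
	    for name in admin kubelet controller-manager scheduler; do
	        conf="/etc/kubernetes/${name}.conf"
	        sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"   # drop configs that point elsewhere
	    done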
	I0120 12:32:34.035634  672840 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0120 12:32:34.082588  672840 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:32:34.082769  672840 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:32:34.110149  672840 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0120 12:32:34.110309  672840 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0120 12:32:34.110388  672840 kubeadm.go:310] OS: Linux
	I0120 12:32:34.110464  672840 kubeadm.go:310] CGROUPS_CPU: enabled
	I0120 12:32:34.110545  672840 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0120 12:32:34.110628  672840 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0120 12:32:34.110705  672840 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0120 12:32:34.110781  672840 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0120 12:32:34.110862  672840 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0120 12:32:34.110938  672840 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0120 12:32:34.111018  672840 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0120 12:32:34.111093  672840 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0120 12:32:34.181795  672840 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:32:34.181967  672840 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:32:34.182097  672840 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:32:34.188631  672840 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
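	The SystemVerification failure printed above is expected under the docker driver and is explicitly waived, along with the host-resource checks, via --ignore-preflight-errors (see the full invocation logged earlier). A trimmed form of that invocation, keeping only the driver-related exemptions:
	    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
	        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	        --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem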
	I0120 12:32:33.175082  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:35.672357  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:34.192837  672840 out.go:235]   - Generating certificates and keys ...
	I0120 12:32:34.192949  672840 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:32:34.193023  672840 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:32:34.459338  672840 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 12:32:35.343439  672840 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 12:32:35.789919  672840 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 12:32:36.264291  672840 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 12:32:37.427984  672840 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 12:32:37.428391  672840 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-180778 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0120 12:32:38.116148  672840 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 12:32:38.116473  672840 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-180778 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0120 12:32:38.172501  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:40.172959  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:42.174961  663170 pod_ready.go:103] pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace has status "Ready":"False"
	I0120 12:32:39.001275  672840 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 12:32:40.122022  672840 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 12:32:40.477529  672840 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 12:32:40.477849  672840 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:32:40.949137  672840 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:32:41.285314  672840 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:32:41.578848  672840 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:32:42.240152  672840 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:32:43.021708  672840 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:32:43.022886  672840 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:32:43.026134  672840 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:32:43.029359  672840 out.go:235]   - Booting up control plane ...
	I0120 12:32:43.029462  672840 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:32:43.029539  672840 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:32:43.030911  672840 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:32:43.047099  672840 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:32:43.054424  672840 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:32:43.054496  672840 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:32:43.187520  672840 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:32:43.187647  672840 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:32:42.665891  663170 pod_ready.go:82] duration metric: took 4m0.000999177s for pod "metrics-server-9975d5f86-h8bg5" in "kube-system" namespace to be "Ready" ...
	E0120 12:32:42.665923  663170 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 12:32:42.665934  663170 pod_ready.go:39] duration metric: took 5m25.307823459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
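	The readiness polling that just timed out corresponds to watching the pod's Ready condition; an equivalent manual check against this profile (context and pod name taken from the lines above) would be:
	    kubectl --context old-k8s-version-618033 -n kube-system \
	        get pod metrics-server-9975d5f86-h8bg5 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True or False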
	I0120 12:32:42.665953  663170 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:32:42.665985  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:42.666060  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:42.761425  663170 cri.go:89] found id: "5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
	I0120 12:32:42.761457  663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
	I0120 12:32:42.761464  663170 cri.go:89] found id: ""
	I0120 12:32:42.761472  663170 logs.go:282] 2 containers: [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15]
	I0120 12:32:42.761530  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.766334  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.770402  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 12:32:42.770477  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:42.840870  663170 cri.go:89] found id: "d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
	I0120 12:32:42.840890  663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
	I0120 12:32:42.840895  663170 cri.go:89] found id: ""
	I0120 12:32:42.840902  663170 logs.go:282] 2 containers: [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf]
	I0120 12:32:42.840959  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.846031  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.850194  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 12:32:42.850260  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:42.904928  663170 cri.go:89] found id: "b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
	I0120 12:32:42.904957  663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
	I0120 12:32:42.904963  663170 cri.go:89] found id: ""
	I0120 12:32:42.904970  663170 logs.go:282] 2 containers: [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075]
	I0120 12:32:42.905025  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.909172  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.912704  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:42.912772  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:42.968944  663170 cri.go:89] found id: "d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
	I0120 12:32:42.969015  663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
	I0120 12:32:42.969035  663170 cri.go:89] found id: ""
	I0120 12:32:42.969061  663170 logs.go:282] 2 containers: [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0]
	I0120 12:32:42.969168  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.973579  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:42.978112  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:42.978252  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:43.050120  663170 cri.go:89] found id: "3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
	I0120 12:32:43.050196  663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
	I0120 12:32:43.050216  663170 cri.go:89] found id: ""
	I0120 12:32:43.050241  663170 logs.go:282] 2 containers: [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03]
	I0120 12:32:43.050338  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.054664  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.058589  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:43.058720  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:43.117777  663170 cri.go:89] found id: "beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
	I0120 12:32:43.117802  663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
	I0120 12:32:43.117807  663170 cri.go:89] found id: ""
	I0120 12:32:43.117814  663170 logs.go:282] 2 containers: [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2]
	I0120 12:32:43.117901  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.126390  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.136897  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:43.137072  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:43.200437  663170 cri.go:89] found id: "a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
	I0120 12:32:43.200515  663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
	I0120 12:32:43.200538  663170 cri.go:89] found id: ""
	I0120 12:32:43.200565  663170 logs.go:282] 2 containers: [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10]
	I0120 12:32:43.200662  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.204950  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.208929  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:43.209037  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:43.259134  663170 cri.go:89] found id: "d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
	I0120 12:32:43.259192  663170 cri.go:89] found id: ""
	I0120 12:32:43.259224  663170 logs.go:282] 1 containers: [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6]
	I0120 12:32:43.259308  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.263374  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 12:32:43.263497  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 12:32:43.311336  663170 cri.go:89] found id: "2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
	I0120 12:32:43.311398  663170 cri.go:89] found id: "fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
	I0120 12:32:43.311427  663170 cri.go:89] found id: ""
	I0120 12:32:43.311452  663170 logs.go:282] 2 containers: [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224]
	I0120 12:32:43.311549  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.315630  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:43.319342  663170 logs.go:123] Gathering logs for kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] ...
	I0120 12:32:43.319422  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
	I0120 12:32:43.372921  663170 logs.go:123] Gathering logs for storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] ...
	I0120 12:32:43.373003  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
	I0120 12:32:43.427917  663170 logs.go:123] Gathering logs for containerd ...
	I0120 12:32:43.427995  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 12:32:43.498070  663170 logs.go:123] Gathering logs for container status ...
	I0120 12:32:43.498147  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:43.571418  663170 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:43.571498  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 12:32:43.636420  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.205553     655 reflector.go:138] object-"kube-system"/"coredns-token-brbgd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-brbgd" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.636782  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.218010     655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.637034  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.305944     655 reflector.go:138] object-"kube-system"/"metrics-server-token-t7n5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t7n5d" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.637271  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306033     655 reflector.go:138] object-"kube-system"/"kindnet-token-htldq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-htldq" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.637517  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306082     655 reflector.go:138] object-"kube-system"/"kube-proxy-token-85wbm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85wbm" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.637864  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306131     655 reflector.go:138] object-"default"/"default-token-pngw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pngw5" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.638104  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306180     655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.638355  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306224     655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fgdsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fgdsf" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:43.646625  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.721324     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.646855  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.754709     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.650337  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:35 old-k8s-version-618033 kubelet[655]: E0120 12:27:35.610171     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.652382  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.614294     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.652979  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.976430     655 pod_workers.go:191] Error syncing pod 7614f8ae-aae6-4203-96ff-40a900278cf6 ("storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"
	W0120 12:32:43.653464  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:50 old-k8s-version-618033 kubelet[655]: E0120 12:27:50.989213     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.653826  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:51 old-k8s-version-618033 kubelet[655]: E0120 12:27:51.992870     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.654514  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:58 old-k8s-version-618033 kubelet[655]: E0120 12:27:58.673483     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.656989  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:00 old-k8s-version-618033 kubelet[655]: E0120 12:28:00.612213     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.657747  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:12 old-k8s-version-618033 kubelet[655]: E0120 12:28:12.084652     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.657954  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:13 old-k8s-version-618033 kubelet[655]: E0120 12:28:13.597787     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.658300  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:18 old-k8s-version-618033 kubelet[655]: E0120 12:28:18.673267     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.658511  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:26 old-k8s-version-618033 kubelet[655]: E0120 12:28:26.596529     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.658872  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:30 old-k8s-version-618033 kubelet[655]: E0120 12:28:30.596223     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.659081  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:40 old-k8s-version-618033 kubelet[655]: E0120 12:28:40.596632     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.659690  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:43 old-k8s-version-618033 kubelet[655]: E0120 12:28:43.165668     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.660040  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:48 old-k8s-version-618033 kubelet[655]: E0120 12:28:48.673706     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.662593  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:51 old-k8s-version-618033 kubelet[655]: E0120 12:28:51.610383     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.663687  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:03 old-k8s-version-618033 kubelet[655]: E0120 12:29:03.602577     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.664062  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:04 old-k8s-version-618033 kubelet[655]: E0120 12:29:04.596213     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.664277  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:17 old-k8s-version-618033 kubelet[655]: E0120 12:29:17.597227     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.664622  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:18 old-k8s-version-618033 kubelet[655]: E0120 12:29:18.596696     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.664827  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:30 old-k8s-version-618033 kubelet[655]: E0120 12:29:30.596660     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.665441  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:33 old-k8s-version-618033 kubelet[655]: E0120 12:29:33.299251     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.665804  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:38 old-k8s-version-618033 kubelet[655]: E0120 12:29:38.673765     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.666010  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:42 old-k8s-version-618033 kubelet[655]: E0120 12:29:42.596621     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.666361  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:49 old-k8s-version-618033 kubelet[655]: E0120 12:29:49.596280     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.666576  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:57 old-k8s-version-618033 kubelet[655]: E0120 12:29:57.598023     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.666934  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:03 old-k8s-version-618033 kubelet[655]: E0120 12:30:03.596329     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.667150  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:10 old-k8s-version-618033 kubelet[655]: E0120 12:30:10.596520     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.667504  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:14 old-k8s-version-618033 kubelet[655]: E0120 12:30:14.596119     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.670002  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:22 old-k8s-version-618033 kubelet[655]: E0120 12:30:22.605228     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:43.670427  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:28 old-k8s-version-618033 kubelet[655]: E0120 12:30:28.596161     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.670640  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:36 old-k8s-version-618033 kubelet[655]: E0120 12:30:36.596791     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.670990  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:41 old-k8s-version-618033 kubelet[655]: E0120 12:30:41.596271     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.671204  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:49 old-k8s-version-618033 kubelet[655]: E0120 12:30:49.600904     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.671841  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:54 old-k8s-version-618033 kubelet[655]: E0120 12:30:54.524938     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.672233  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:58 old-k8s-version-618033 kubelet[655]: E0120 12:30:58.673184     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.672442  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:00 old-k8s-version-618033 kubelet[655]: E0120 12:31:00.596643     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.672687  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:12 old-k8s-version-618033 kubelet[655]: E0120 12:31:12.596594     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.673044  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:13 old-k8s-version-618033 kubelet[655]: E0120 12:31:13.596456     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.673252  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:24 old-k8s-version-618033 kubelet[655]: E0120 12:31:24.596574     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.673646  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:25 old-k8s-version-618033 kubelet[655]: E0120 12:31:25.596263     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.673853  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.674203  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.674413  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.674775  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.674982  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.675330  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.675536  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.675891  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.676100  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:43.676456  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:43.676672  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0120 12:32:43.676711  663170 logs.go:123] Gathering logs for kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] ...
	I0120 12:32:43.676747  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
	I0120 12:32:43.743941  663170 logs.go:123] Gathering logs for kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] ...
	I0120 12:32:43.743981  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
	I0120 12:32:43.806114  663170 logs.go:123] Gathering logs for kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] ...
	I0120 12:32:43.806150  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
	I0120 12:32:43.857911  663170 logs.go:123] Gathering logs for kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] ...
	I0120 12:32:43.857941  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
	I0120 12:32:43.917003  663170 logs.go:123] Gathering logs for kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] ...
	I0120 12:32:43.917032  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
	I0120 12:32:43.992709  663170 logs.go:123] Gathering logs for etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] ...
	I0120 12:32:43.992759  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
	I0120 12:32:44.068689  663170 logs.go:123] Gathering logs for coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] ...
	I0120 12:32:44.068723  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
	I0120 12:32:44.123499  663170 logs.go:123] Gathering logs for kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] ...
	I0120 12:32:44.123529  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
	I0120 12:32:44.181810  663170 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:44.181838  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:44.204612  663170 logs.go:123] Gathering logs for etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] ...
	I0120 12:32:44.204641  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
	I0120 12:32:44.262671  663170 logs.go:123] Gathering logs for kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] ...
	I0120 12:32:44.262704  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
	I0120 12:32:44.313537  663170 logs.go:123] Gathering logs for storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] ...
	I0120 12:32:44.313569  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
	I0120 12:32:44.385646  663170 logs.go:123] Gathering logs for kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] ...
	I0120 12:32:44.385744  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
	I0120 12:32:44.474032  663170 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:44.474111  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 12:32:44.677528  663170 logs.go:123] Gathering logs for coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] ...
	I0120 12:32:44.677562  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
	I0120 12:32:44.721616  663170 logs.go:123] Gathering logs for kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] ...
	I0120 12:32:44.721690  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
	I0120 12:32:44.768059  663170 logs.go:123] Gathering logs for kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] ...
	I0120 12:32:44.768141  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
	I0120 12:32:44.829786  663170 out.go:358] Setting ErrFile to fd 2...
	I0120 12:32:44.829821  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 12:32:44.829881  663170 out.go:270] X Problems detected in kubelet:
	W0120 12:32:44.829892  663170 out.go:270]   Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:44.829917  663170 out.go:270]   Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:44.829953  663170 out.go:270]   Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:44.829961  663170 out.go:270]   Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:44.829967  663170 out.go:270]   Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0120 12:32:44.829972  663170 out.go:358] Setting ErrFile to fd 2...
	I0120 12:32:44.829979  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
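This closes one full log-gathering pass for profile old-k8s-version-618033: for each component the runner tails the last 400 lines of every discovered container over SSH, adds dmesg and `kubectl describe nodes`, then re-prints the kubelet problems it flagged. A hedged sketch of reproducing a single collection step by hand from inside the node; the container ID is one of the kube-apiserver IDs gathered in this pass:

    # Enter the node first, e.g.: minikube -p old-k8s-version-618033 ssh
    sudo /usr/bin/crictl logs --tail 400 5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a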
	I0120 12:32:44.687318  672840 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500952428s
	I0120 12:32:44.687407  672840 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:32:51.189382  672840 kubeadm.go:310] [api-check] The API server is healthy after 6.502060656s
	I0120 12:32:51.215697  672840 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:32:51.235012  672840 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:32:51.263519  672840 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:32:51.263728  672840 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-180778 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:32:51.276027  672840 kubeadm.go:310] [bootstrap-token] Using token: vydbii.6x4lt3eagn7amsg9
	I0120 12:32:51.278964  672840 out.go:235]   - Configuring RBAC rules ...
	I0120 12:32:51.279099  672840 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:32:51.286276  672840 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:32:51.297527  672840 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:32:51.301940  672840 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:32:51.306375  672840 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:32:51.311523  672840 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:32:51.606710  672840 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:32:52.046119  672840 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:32:52.598023  672840 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:32:52.599239  672840 kubeadm.go:310] 
	I0120 12:32:52.599327  672840 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:32:52.599343  672840 kubeadm.go:310] 
	I0120 12:32:52.599422  672840 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:32:52.599432  672840 kubeadm.go:310] 
	I0120 12:32:52.599458  672840 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:32:52.599521  672840 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:32:52.599577  672840 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:32:52.599585  672840 kubeadm.go:310] 
	I0120 12:32:52.599638  672840 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:32:52.599646  672840 kubeadm.go:310] 
	I0120 12:32:52.599694  672840 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:32:52.599702  672840 kubeadm.go:310] 
	I0120 12:32:52.599754  672840 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:32:52.599833  672840 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:32:52.599904  672840 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:32:52.599916  672840 kubeadm.go:310] 
	I0120 12:32:52.600000  672840 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:32:52.600080  672840 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:32:52.600088  672840 kubeadm.go:310] 
	I0120 12:32:52.600176  672840 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vydbii.6x4lt3eagn7amsg9 \
	I0120 12:32:52.600284  672840 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cf58d6b4df152431c4946a83dccf7fb472b0285b6e4dd4c00154a1eb2bb479b5 \
	I0120 12:32:52.600308  672840 kubeadm.go:310] 	--control-plane 
	I0120 12:32:52.600316  672840 kubeadm.go:310] 
	I0120 12:32:52.600401  672840 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:32:52.600410  672840 kubeadm.go:310] 
	I0120 12:32:52.600492  672840 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vydbii.6x4lt3eagn7amsg9 \
	I0120 12:32:52.600600  672840 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cf58d6b4df152431c4946a83dccf7fb472b0285b6e4dd4c00154a1eb2bb479b5 
	I0120 12:32:52.605496  672840 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0120 12:32:52.605767  672840 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0120 12:32:52.605943  672840 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
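All three kubeadm [WARNING] lines are non-fatal here: cgroups v1 is merely in maintenance mode, the `configs` kernel module is simply absent from this 5.15.0-1075-aws kernel (so kernel-config verification is skipped), and minikube manages the kubelet lifecycle itself rather than enabling the systemd unit. On a self-managed node the last warning would be cleared exactly as kubeadm suggests:

    # Only relevant outside minikube, which starts the kubelet itself.
    sudo systemctl enable kubelet.service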
	I0120 12:32:52.605981  672840 cni.go:84] Creating CNI manager for ""
	I0120 12:32:52.605991  672840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 12:32:52.609312  672840 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0120 12:32:52.612206  672840 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0120 12:32:52.616083  672840 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 12:32:52.616104  672840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0120 12:32:52.636124  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
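With the "docker" driver and "containerd" runtime, minikube selects kindnet, copies the 2601-byte manifest to /var/tmp/minikube/cni.yaml, and applies it with the cluster's own kubectl binary. A hedged verification that the CNI DaemonSet came up; the app=kindnet label selector is an assumption about minikube's kindnet manifest, not something this log confirms:

    # Label selector is assumed; drop -l to list all kube-system pods instead.
    kubectl -n kube-system get pods -l app=kindnet -o wide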
	I0120 12:32:52.946561  672840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:32:52.946707  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:52.946789  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-180778 minikube.k8s.io/updated_at=2025_01_20T12_32_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9 minikube.k8s.io/name=embed-certs-180778 minikube.k8s.io/primary=true
	I0120 12:32:53.119011  672840 ops.go:34] apiserver oom_adj: -16
	I0120 12:32:53.119124  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:53.619448  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:54.119835  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:54.619725  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:55.119402  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:55.619208  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:56.119334  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:56.620058  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:57.120025  672840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:32:57.339155  672840 kubeadm.go:1113] duration metric: took 4.39249512s to wait for elevateKubeSystemPrivileges
	I0120 12:32:57.339183  672840 kubeadm.go:394] duration metric: took 23.450905389s to StartCluster
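The repeated `kubectl get sa default` calls at roughly 500ms intervals are the elevateKubeSystemPrivileges wait: the minikube-rbac ClusterRoleBinding created above only matters once the `default` ServiceAccount exists, so the runner polls until the call succeeds (about 4.4s in this run, per the duration metric above). A minimal shell sketch of the same poll, using the kubeconfig path from the log:

    # Mirrors the retry loop in the log: poll until the ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done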
	I0120 12:32:57.339205  672840 settings.go:142] acquiring lock: {Name:mka92edde1befc8914a01871e41167ef1a7b90c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:57.339266  672840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20151-446459/kubeconfig
	I0120 12:32:57.340658  672840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20151-446459/kubeconfig: {Name:mkd202431392e920a92afeece62697072b25ee29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:32:57.340876  672840 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0120 12:32:57.340959  672840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 12:32:57.341194  672840 config.go:182] Loaded profile config "embed-certs-180778": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:32:57.341227  672840 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 12:32:57.341284  672840 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-180778"
	I0120 12:32:57.341298  672840 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-180778"
	I0120 12:32:57.341319  672840 host.go:66] Checking if "embed-certs-180778" exists ...
	I0120 12:32:57.341978  672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
	I0120 12:32:57.342482  672840 addons.go:69] Setting default-storageclass=true in profile "embed-certs-180778"
	I0120 12:32:57.342502  672840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-180778"
	I0120 12:32:57.342796  672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
	I0120 12:32:57.346015  672840 out.go:177] * Verifying Kubernetes components...
	I0120 12:32:57.354182  672840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:32:57.390410  672840 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:32:54.831056  663170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:32:54.842999  663170 api_server.go:72] duration metric: took 5m55.169056051s to wait for apiserver process to appear ...
	I0120 12:32:54.843025  663170 api_server.go:88] waiting for apiserver healthz status ...
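Back in the old-k8s-version-618033 process, the runner now waits for apiserver healthz status (the process wait above already took 5m55s) and begins another container-discovery pass while it polls. The same endpoint can be probed by hand, assuming working credentials for this profile:

    # Prints "ok" once the apiserver reports healthy.
    kubectl get --raw /healthz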
	I0120 12:32:54.843060  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0120 12:32:54.843120  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 12:32:54.892331  663170 cri.go:89] found id: "5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
	I0120 12:32:54.892355  663170 cri.go:89] found id: "6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
	I0120 12:32:54.892360  663170 cri.go:89] found id: ""
	I0120 12:32:54.892367  663170 logs.go:282] 2 containers: [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15]
	I0120 12:32:54.892424  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:54.896167  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:54.899483  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0120 12:32:54.899551  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 12:32:54.947556  663170 cri.go:89] found id: "d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
	I0120 12:32:54.947585  663170 cri.go:89] found id: "4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
	I0120 12:32:54.947591  663170 cri.go:89] found id: ""
	I0120 12:32:54.947598  663170 logs.go:282] 2 containers: [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf]
	I0120 12:32:54.947656  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:54.951481  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:54.955038  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0120 12:32:54.955113  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 12:32:54.999061  663170 cri.go:89] found id: "b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
	I0120 12:32:54.999094  663170 cri.go:89] found id: "31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
	I0120 12:32:54.999099  663170 cri.go:89] found id: ""
	I0120 12:32:54.999106  663170 logs.go:282] 2 containers: [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075]
	I0120 12:32:54.999164  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.003398  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.006791  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0120 12:32:55.006865  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 12:32:55.053724  663170 cri.go:89] found id: "d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
	I0120 12:32:55.053750  663170 cri.go:89] found id: "758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
	I0120 12:32:55.053755  663170 cri.go:89] found id: ""
	I0120 12:32:55.053763  663170 logs.go:282] 2 containers: [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0]
	I0120 12:32:55.053826  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.057957  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.061739  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0120 12:32:55.061865  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 12:32:55.112602  663170 cri.go:89] found id: "3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
	I0120 12:32:55.112625  663170 cri.go:89] found id: "a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
	I0120 12:32:55.112631  663170 cri.go:89] found id: ""
	I0120 12:32:55.112638  663170 logs.go:282] 2 containers: [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03]
	I0120 12:32:55.112718  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.116611  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.121704  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 12:32:55.121779  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 12:32:55.181387  663170 cri.go:89] found id: "beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
	I0120 12:32:55.181409  663170 cri.go:89] found id: "8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
	I0120 12:32:55.181414  663170 cri.go:89] found id: ""
	I0120 12:32:55.181421  663170 logs.go:282] 2 containers: [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2]
	I0120 12:32:55.181497  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.186863  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.191042  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0120 12:32:55.191113  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 12:32:55.244409  663170 cri.go:89] found id: "a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
	I0120 12:32:55.244442  663170 cri.go:89] found id: "2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
	I0120 12:32:55.244449  663170 cri.go:89] found id: ""
	I0120 12:32:55.244456  663170 logs.go:282] 2 containers: [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10]
	I0120 12:32:55.244522  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.253198  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.260336  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0120 12:32:55.260427  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0120 12:32:55.307825  663170 cri.go:89] found id: "2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
	I0120 12:32:55.307847  663170 cri.go:89] found id: "fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
	I0120 12:32:55.307851  663170 cri.go:89] found id: ""
	I0120 12:32:55.307858  663170 logs.go:282] 2 containers: [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224]
	I0120 12:32:55.307925  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.311753  663170 ssh_runner.go:195] Run: which crictl
	I0120 12:32:55.315323  663170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 12:32:55.315404  663170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 12:32:55.356240  663170 cri.go:89] found id: "d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
	I0120 12:32:55.356269  663170 cri.go:89] found id: ""
	I0120 12:32:55.356277  663170 logs.go:282] 1 containers: [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6]
	I0120 12:32:55.356345  663170 ssh_runner.go:195] Run: which crictl
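That completes the discovery half of this pass: for every component the runner executes `crictl ps -a --quiet --name=<component>` to collect the IDs of all instances, exited ones included, which is why most components report two container IDs here while kubernetes-dashboard reports only one (presumably only a single instance has run so far). The discovery command on its own, runnable inside the node:

    # -a includes exited containers; --quiet prints bare IDs for scripting.
    sudo crictl ps -a --quiet --name=kube-apiserver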
	I0120 12:32:55.359958  663170 logs.go:123] Gathering logs for kubelet ...
	I0120 12:32:55.359984  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0120 12:32:55.418304  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.205553     655 reflector.go:138] object-"kube-system"/"coredns-token-brbgd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-brbgd" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.418614  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.218010     655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.418849  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.305944     655 reflector.go:138] object-"kube-system"/"metrics-server-token-t7n5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t7n5d" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419071  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306033     655 reflector.go:138] object-"kube-system"/"kindnet-token-htldq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-htldq" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419291  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306082     655 reflector.go:138] object-"kube-system"/"kube-proxy-token-85wbm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85wbm" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419546  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306131     655 reflector.go:138] object-"default"/"default-token-pngw5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pngw5" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419756  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306180     655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.419984  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:17 old-k8s-version-618033 kubelet[655]: E0120 12:27:17.306224     655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-fgdsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-fgdsf" is forbidden: User "system:node:old-k8s-version-618033" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-618033' and this object
	W0120 12:32:55.428109  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.721324     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.428309  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:19 old-k8s-version-618033 kubelet[655]: E0120 12:27:19.754709     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.431722  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:35 old-k8s-version-618033 kubelet[655]: E0120 12:27:35.610171     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.433745  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.614294     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.434318  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:49 old-k8s-version-618033 kubelet[655]: E0120 12:27:49.976430     655 pod_workers.go:191] Error syncing pod 7614f8ae-aae6-4203-96ff-40a900278cf6 ("storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7614f8ae-aae6-4203-96ff-40a900278cf6)"
	W0120 12:32:55.434787  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:50 old-k8s-version-618033 kubelet[655]: E0120 12:27:50.989213     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.435118  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:51 old-k8s-version-618033 kubelet[655]: E0120 12:27:51.992870     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.435792  663170 logs.go:138] Found kubelet problem: Jan 20 12:27:58 old-k8s-version-618033 kubelet[655]: E0120 12:27:58.673483     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.438245  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:00 old-k8s-version-618033 kubelet[655]: E0120 12:28:00.612213     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.438973  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:12 old-k8s-version-618033 kubelet[655]: E0120 12:28:12.084652     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.439157  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:13 old-k8s-version-618033 kubelet[655]: E0120 12:28:13.597787     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.439485  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:18 old-k8s-version-618033 kubelet[655]: E0120 12:28:18.673267     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.439669  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:26 old-k8s-version-618033 kubelet[655]: E0120 12:28:26.596529     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.439998  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:30 old-k8s-version-618033 kubelet[655]: E0120 12:28:30.596223     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.440181  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:40 old-k8s-version-618033 kubelet[655]: E0120 12:28:40.596632     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.440772  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:43 old-k8s-version-618033 kubelet[655]: E0120 12:28:43.165668     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.441099  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:48 old-k8s-version-618033 kubelet[655]: E0120 12:28:48.673706     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.443603  663170 logs.go:138] Found kubelet problem: Jan 20 12:28:51 old-k8s-version-618033 kubelet[655]: E0120 12:28:51.610383     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.443790  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:03 old-k8s-version-618033 kubelet[655]: E0120 12:29:03.602577     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.444120  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:04 old-k8s-version-618033 kubelet[655]: E0120 12:29:04.596213     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.444327  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:17 old-k8s-version-618033 kubelet[655]: E0120 12:29:17.597227     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.444659  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:18 old-k8s-version-618033 kubelet[655]: E0120 12:29:18.596696     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.444844  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:30 old-k8s-version-618033 kubelet[655]: E0120 12:29:30.596660     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.445435  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:33 old-k8s-version-618033 kubelet[655]: E0120 12:29:33.299251     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.445773  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:38 old-k8s-version-618033 kubelet[655]: E0120 12:29:38.673765     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.445961  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:42 old-k8s-version-618033 kubelet[655]: E0120 12:29:42.596621     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.446294  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:49 old-k8s-version-618033 kubelet[655]: E0120 12:29:49.596280     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.446482  663170 logs.go:138] Found kubelet problem: Jan 20 12:29:57 old-k8s-version-618033 kubelet[655]: E0120 12:29:57.598023     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.446813  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:03 old-k8s-version-618033 kubelet[655]: E0120 12:30:03.596329     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.446998  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:10 old-k8s-version-618033 kubelet[655]: E0120 12:30:10.596520     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.447326  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:14 old-k8s-version-618033 kubelet[655]: E0120 12:30:14.596119     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.449780  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:22 old-k8s-version-618033 kubelet[655]: E0120 12:30:22.605228     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0120 12:32:55.450110  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:28 old-k8s-version-618033 kubelet[655]: E0120 12:30:28.596161     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.450297  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:36 old-k8s-version-618033 kubelet[655]: E0120 12:30:36.596791     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.450632  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:41 old-k8s-version-618033 kubelet[655]: E0120 12:30:41.596271     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.450817  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:49 old-k8s-version-618033 kubelet[655]: E0120 12:30:49.600904     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.451412  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:54 old-k8s-version-618033 kubelet[655]: E0120 12:30:54.524938     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.451745  663170 logs.go:138] Found kubelet problem: Jan 20 12:30:58 old-k8s-version-618033 kubelet[655]: E0120 12:30:58.673184     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.451930  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:00 old-k8s-version-618033 kubelet[655]: E0120 12:31:00.596643     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.452114  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:12 old-k8s-version-618033 kubelet[655]: E0120 12:31:12.596594     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.452442  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:13 old-k8s-version-618033 kubelet[655]: E0120 12:31:13.596456     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.452627  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:24 old-k8s-version-618033 kubelet[655]: E0120 12:31:24.596574     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.452954  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:25 old-k8s-version-618033 kubelet[655]: E0120 12:31:25.596263     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.453138  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.453469  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.453659  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.453990  663170 logs.go:138] Found kubelet problem: Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.454174  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.454503  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.454690  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.455019  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.455203  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.455532  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.455716  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:55.456045  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:55.456231  663170 logs.go:138] Found kubelet problem: Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0120 12:32:55.456240  663170 logs.go:123] Gathering logs for coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] ...
	I0120 12:32:55.456257  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5"
	I0120 12:32:55.498655  663170 logs.go:123] Gathering logs for kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] ...
	I0120 12:32:55.498685  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03"
	I0120 12:32:55.545339  663170 logs.go:123] Gathering logs for kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] ...
	I0120 12:32:55.545367  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0"
	I0120 12:32:55.695497  663170 logs.go:123] Gathering logs for kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] ...
	I0120 12:32:55.695578  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2"
	I0120 12:32:55.790895  663170 logs.go:123] Gathering logs for dmesg ...
	I0120 12:32:55.790932  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 12:32:55.808465  663170 logs.go:123] Gathering logs for kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] ...
	I0120 12:32:55.808496  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a"
	I0120 12:32:55.866823  663170 logs.go:123] Gathering logs for kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] ...
	I0120 12:32:55.866858  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15"
	I0120 12:32:55.996274  663170 logs.go:123] Gathering logs for kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] ...
	I0120 12:32:55.996312  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e"
	I0120 12:32:56.059035  663170 logs.go:123] Gathering logs for storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] ...
	I0120 12:32:56.059067  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d"
	I0120 12:32:56.108806  663170 logs.go:123] Gathering logs for etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] ...
	I0120 12:32:56.108854  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7"
	I0120 12:32:56.180797  663170 logs.go:123] Gathering logs for kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] ...
	I0120 12:32:56.180898  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3"
	I0120 12:32:56.249831  663170 logs.go:123] Gathering logs for storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] ...
	I0120 12:32:56.249864  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224"
	I0120 12:32:56.297821  663170 logs.go:123] Gathering logs for kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] ...
	I0120 12:32:56.297851  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6"
	I0120 12:32:56.353347  663170 logs.go:123] Gathering logs for container status ...
	I0120 12:32:56.353381  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 12:32:56.414819  663170 logs.go:123] Gathering logs for describe nodes ...
	I0120 12:32:56.414848  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0120 12:32:56.561358  663170 logs.go:123] Gathering logs for etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] ...
	I0120 12:32:56.561390  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf"
	I0120 12:32:56.626001  663170 logs.go:123] Gathering logs for coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] ...
	I0120 12:32:56.626092  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075"
	I0120 12:32:56.674576  663170 logs.go:123] Gathering logs for kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] ...
	I0120 12:32:56.674668  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0"
	I0120 12:32:56.731078  663170 logs.go:123] Gathering logs for kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] ...
	I0120 12:32:56.731162  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352"
	I0120 12:32:56.784777  663170 logs.go:123] Gathering logs for kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] ...
	I0120 12:32:56.784856  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10"
	I0120 12:32:56.839707  663170 logs.go:123] Gathering logs for containerd ...
	I0120 12:32:56.839793  663170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0120 12:32:56.911951  663170 out.go:358] Setting ErrFile to fd 2...
	I0120 12:32:56.911990  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0120 12:32:56.912046  663170 out.go:270] X Problems detected in kubelet:
	W0120 12:32:56.912063  663170 out.go:270]   Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:56.912071  663170 out.go:270]   Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:56.912084  663170 out.go:270]   Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0120 12:32:56.912099  663170 out.go:270]   Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	W0120 12:32:56.912124  663170 out.go:270]   Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0120 12:32:56.912129  663170 out.go:358] Setting ErrFile to fd 2...
	I0120 12:32:56.912136  663170 out.go:392] TERM=,COLORTERM=, which probably does not support color
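	The two recurring kubelet problems flagged above are the expected failure signature for this test: metrics-server points at the intentionally unresolvable registry fake.domain (so ImagePullBackOff is by design), while dashboard-metrics-scraper sits in CrashLoopBackOff with a 2m40s back-off. A minimal sketch for inspecting both from the host, assuming the kubeconfig context name matches the profile and the standard k8s-app labels from the addon manifests:

	$ kubectl --context old-k8s-version-618033 -n kube-system describe pod -l k8s-app=metrics-server
	$ kubectl --context old-k8s-version-618033 -n kubernetes-dashboard logs -l k8s-app=dashboard-metrics-scraper --previous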
	I0120 12:32:57.392100  672840 addons.go:238] Setting addon default-storageclass=true in "embed-certs-180778"
	I0120 12:32:57.392142  672840 host.go:66] Checking if "embed-certs-180778" exists ...
	I0120 12:32:57.392568  672840 cli_runner.go:164] Run: docker container inspect embed-certs-180778 --format={{.State.Status}}
	I0120 12:32:57.397067  672840 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:32:57.397089  672840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:32:57.397155  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:57.427311  672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
	I0120 12:32:57.434064  672840 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:32:57.434092  672840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:32:57.434166  672840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-180778
	I0120 12:32:57.460556  672840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33474 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/embed-certs-180778/id_rsa Username:docker}
	I0120 12:32:57.797407  672840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 12:32:57.797553  672840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:32:57.802382  672840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:32:57.862359  672840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:32:58.492584  672840 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
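	The ConfigMap pipeline above fetches the live coredns Corefile, splices a hosts block in ahead of the forward directive and a log directive ahead of errors via sed, then replaces the ConfigMap; the "host record injected" line confirms it took effect. Reconstructed from those sed expressions, the edited server block would read roughly as follows (directives not touched by the rewrite elided with ...):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}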
	I0120 12:32:58.495181  672840 node_ready.go:35] waiting up to 6m0s for node "embed-certs-180778" to be "Ready" ...
	I0120 12:32:58.518417  672840 node_ready.go:49] node "embed-certs-180778" has status "Ready":"True"
	I0120 12:32:58.518447  672840 node_ready.go:38] duration metric: took 23.23269ms for node "embed-certs-180778" to be "Ready" ...
	I0120 12:32:58.518459  672840 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:32:58.529809  672840 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2n425" in "kube-system" namespace to be "Ready" ...
	I0120 12:32:58.756753  672840 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 12:32:58.759666  672840 addons.go:514] duration metric: took 1.418427551s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 12:32:58.997764  672840 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-180778" context rescaled to 1 replicas
	I0120 12:32:59.532719  672840 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-2n425" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-2n425" not found
	I0120 12:32:59.532751  672840 pod_ready.go:82] duration metric: took 1.00290741s for pod "coredns-668d6bf9bc-2n425" in "kube-system" namespace to be "Ready" ...
	E0120 12:32:59.532764  672840 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-2n425" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-2n425" not found
	I0120 12:32:59.532771  672840 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fkxfj" in "kube-system" namespace to be "Ready" ...
	I0120 12:33:01.540135  672840 pod_ready.go:103] pod "coredns-668d6bf9bc-fkxfj" in "kube-system" namespace has status "Ready":"False"
	I0120 12:33:06.913477  663170 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0120 12:33:06.924185  663170 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0120 12:33:06.927401  663170 out.go:201] 
	W0120 12:33:06.930237  663170 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0120 12:33:06.930282  663170 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0120 12:33:06.930305  663170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0120 12:33:06.930314  663170 out.go:270] * 
	W0120 12:33:06.931223  663170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 12:33:06.933295  663170 out.go:201] 
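	The start command therefore exits 102: minikube's wait loop never saw the control plane report v1.20.0, even though the final healthz probe above returned 200. Following the log's own suggestions, the cleanup and log-collection steps would look like this (sketch only; logs must be captured before delete, since --all removes every profile and --purge also deletes the .minikube directory):

	$ out/minikube-linux-arm64 -p old-k8s-version-618033 logs --file=logs.txt
	$ out/minikube-linux-arm64 delete --all --purge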
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b4c7a3b420fff       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   7f3e7ce10ee34       dashboard-metrics-scraper-8d5bb5db8-jmvh6
	2dbb0b8040357       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   07ad5812adf0d       storage-provisioner
	d698a9d5733df       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   7923798dc5940       kubernetes-dashboard-cd95d586-g46zv
	3ae3ce774b5dc       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   72ce9fced84f4       kube-proxy-q2cdx
	b03ba2b22cc03       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   1ef3dcbc90b3a       coredns-74ff55c5b-vjbl2
	fcc769c7e3726       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   07ad5812adf0d       storage-provisioner
	7453f2b338621       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   64c7edb219399       busybox
	a6dfb5f612403       2be0bcf609c65       5 minutes ago       Running             kindnet-cni                 1                   1b5b4fb5da9ad       kindnet-vjzbq
	beff5ecb54dc9       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   fad4792077568       kube-controller-manager-old-k8s-version-618033
	d8f6fdcd0e3fb       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   54125e5534c82       kube-scheduler-old-k8s-version-618033
	5d4812f61b58d       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   50ae1022e6f18       kube-apiserver-old-k8s-version-618033
	d0d87daa0a46e       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   0a1200b6c9c33       etcd-old-k8s-version-618033
	82983b3baf56c       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   dae0724dcb014       busybox
	31e7ecd06558c       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   3b8ad40fcaba9       coredns-74ff55c5b-vjbl2
	2927f71245812       2be0bcf609c65       8 minutes ago       Exited              kindnet-cni                 0                   46547b72b9275       kindnet-vjzbq
	a14330fd1aa84       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   d11c3ba6a5027       kube-proxy-q2cdx
	8950cdd4d5874       1df8a2b116bd1       9 minutes ago       Exited              kube-controller-manager     0                   3977abd27cda1       kube-controller-manager-old-k8s-version-618033
	758444c7d1ae5       e7605f88f17d6       9 minutes ago       Exited              kube-scheduler              0                   4f2ca3cd67a7c       kube-scheduler-old-k8s-version-618033
	4ec4dad53941b       05b738aa1bc63       9 minutes ago       Exited              etcd                        0                   5b656e31552bc       etcd-old-k8s-version-618033
	6a26c537f8dc2       2c08bbbc02d3a       9 minutes ago       Exited              kube-apiserver              0                   a82573898c09f       kube-apiserver-old-k8s-version-618033
	
	
	==> containerd <==
	Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.706083575Z" level=info msg="StartContainer for \"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\" returns successfully"
	Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.706278677Z" level=info msg="received exit event container_id:\"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\" id:\"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\" pid:3068 exit_status:255 exited_at:{seconds:1737376172 nanos:705149128}"
	Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.736834692Z" level=info msg="shim disconnected" id=d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594 namespace=k8s.io
	Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.736897642Z" level=warning msg="cleaning up after shim disconnected" id=d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594 namespace=k8s.io
	Jan 20 12:29:32 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:32.736912239Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 12:29:33 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:33.300767068Z" level=info msg="RemoveContainer for \"83e12fc09c110a83b84d726d73830db941995b77fca31381c9cf5418ad46d446\""
	Jan 20 12:29:33 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:29:33.308560883Z" level=info msg="RemoveContainer for \"83e12fc09c110a83b84d726d73830db941995b77fca31381c9cf5418ad46d446\" returns successfully"
	Jan 20 12:30:22 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:22.596997059Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:30:22 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:22.602592657Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Jan 20 12:30:22 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:22.604647825Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Jan 20 12:30:22 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:22.604682131Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.599668356Z" level=info msg="CreateContainer within sandbox \"7f3e7ce10ee34f5d04c421ccacc494cbc4d32135d123b9886d5da4f82a54216a\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.620215824Z" level=info msg="CreateContainer within sandbox \"7f3e7ce10ee34f5d04c421ccacc494cbc4d32135d123b9886d5da4f82a54216a\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\""
	Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.621038164Z" level=info msg="StartContainer for \"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\""
	Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.691919954Z" level=info msg="StartContainer for \"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\" returns successfully"
	Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.692076747Z" level=info msg="received exit event container_id:\"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\" id:\"b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a\" pid:3299 exit_status:255 exited_at:{seconds:1737376253 nanos:691148421}"
	Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.718036139Z" level=info msg="shim disconnected" id=b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a namespace=k8s.io
	Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.718100903Z" level=warning msg="cleaning up after shim disconnected" id=b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a namespace=k8s.io
	Jan 20 12:30:53 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:53.718112316Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 20 12:30:54 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:54.529791856Z" level=info msg="RemoveContainer for \"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\""
	Jan 20 12:30:54 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:30:54.536126848Z" level=info msg="RemoveContainer for \"d42dca91b83b0016d7d5dd886bc9390e81adeb42488b0dadeed17c3e222e4594\" returns successfully"
	Jan 20 12:33:05 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:33:05.597564332Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:33:05 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:33:05.605004907Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Jan 20 12:33:05 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:33:05.607103358Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Jan 20 12:33:05 old-k8s-version-618033 containerd[566]: time="2025-01-20T12:33:05.607108585Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
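	The containerd entries confirm the root cause of the metrics-server ImagePullBackOff: resolving fake.domain against the node resolver at 192.168.85.1:53 fails, which is the intended behavior of the fake registry this test configures. A hedged reproduction from inside the node (getent is used rather than nslookup, which may not be present in the kicbase image):

	$ minikube -p old-k8s-version-618033 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	$ minikube -p old-k8s-version-618033 ssh -- getent hosts fake.domain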
	
	
	==> coredns [31e7ecd06558cde4f80a940c8ebbdb034b65ac240782a634b71d4e8dd9f66075] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:51384 - 38234 "HINFO IN 4655488605509180788.6412446353322546485. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033651761s
	
	
	==> coredns [b03ba2b22cc03d644cdc4eeb59ce274c4f61fb0500a08bf65840d1ea7e8c30d5] <==
	I0120 12:27:50.078581       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 12:27:20.077163535 +0000 UTC m=+0.025399956) (total time: 30.001216086s):
	Trace[2019727887]: [30.001216086s] [30.001216086s] END
	E0120 12:27:50.078819       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0120 12:27:50.079062       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 12:27:20.077242214 +0000 UTC m=+0.025478627) (total time: 30.001804636s):
	Trace[939984059]: [30.001804636s] [30.001804636s] END
	E0120 12:27:50.079072       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0120 12:27:50.081658       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-20 12:27:20.077156191 +0000 UTC m=+0.025392612) (total time: 30.004409395s):
	Trace[911902081]: [30.004409395s] [30.004409395s] END
	E0120 12:27:50.081679       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:42584 - 9585 "HINFO IN 267077671365996973.267405783250503531. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011601665s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
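	The restarted coredns instance spends its first 30s unable to list Services, Endpoints, and Namespaces at the in-cluster API VIP (dial tcp 10.96.0.1:443: i/o timeout), most likely because the service NAT rules were not yet restored immediately after the restart; it then recovers and begins answering queries. A quick check of that path, assuming kube-proxy runs in its default iptables mode:

	$ kubectl --context old-k8s-version-618033 get svc kubernetes -o wide
	$ minikube -p old-k8s-version-618033 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1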
	
	
	==> describe nodes <==
	Name:               old-k8s-version-618033
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-618033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77d80cf1517f5f1439721b28711982314b21bec9
	                    minikube.k8s.io/name=old-k8s-version-618033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T12_24_18_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 12:24:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-618033
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 12:33:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 12:28:07 +0000   Mon, 20 Jan 2025 12:24:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 12:28:07 +0000   Mon, 20 Jan 2025 12:24:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 12:28:07 +0000   Mon, 20 Jan 2025 12:24:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 12:28:07 +0000   Mon, 20 Jan 2025 12:24:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-618033
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 4296a18b2e774369b2694f137a6719b6
	  System UUID:                71f4dabb-94e8-4097-a5f8-81f5631c4c62
	  Boot ID:                    1cf72276-e5cc-4a75-95c3-e1897ed2b9a5
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-vjbl2                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m35s
	  kube-system                 etcd-old-k8s-version-618033                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m42s
	  kube-system                 kindnet-vjzbq                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m35s
	  kube-system                 kube-apiserver-old-k8s-version-618033             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-controller-manager-old-k8s-version-618033    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-proxy-q2cdx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-scheduler-old-k8s-version-618033             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 metrics-server-9975d5f86-h8bg5                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m28s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-jmvh6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-g46zv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  9m2s (x4 over 9m2s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m2s (x3 over 9m2s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m2s (x3 over 9m2s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m42s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m42s                kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m42s                kubelet     Node old-k8s-version-618033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m42s                kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m42s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m35s                kubelet     Node old-k8s-version-618033 status is now: NodeReady
	  Normal  Starting                 8m32s                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m1s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m1s (x7 over 6m1s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x8 over 6m1s)  kubelet     Node old-k8s-version-618033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan20 11:09] hrtimer: interrupt took 29526498 ns
	
	
	==> etcd [4ec4dad53941b6fded47be2cc096131305b39caf2c470ead6e63255fef1467bf] <==
	raft2025/01/20 12:24:08 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2025/01/20 12:24:08 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2025/01/20 12:24:08 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2025-01-20 12:24:08.327246 I | etcdserver: setting up the initial cluster version to 3.4
	2025-01-20 12:24:08.328329 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-01-20 12:24:08.328527 I | etcdserver/api: enabled capabilities for version 3.4
	2025-01-20 12:24:08.328651 I | etcdserver: published {Name:old-k8s-version-618033 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2025-01-20 12:24:08.328738 I | embed: ready to serve client requests
	2025-01-20 12:24:08.332704 I | embed: serving client requests on 127.0.0.1:2379
	2025-01-20 12:24:08.335978 I | embed: ready to serve client requests
	2025-01-20 12:24:08.337372 I | embed: serving client requests on 192.168.85.2:2379
	2025-01-20 12:24:28.522800 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:24:32.303794 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:24:42.303850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:24:52.303845 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:25:02.303930 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:25:12.303716 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:25:22.303660 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:25:32.303768 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:25:42.303892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:25:52.303705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:26:02.303833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:26:12.303725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:26:22.303853 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:26:32.303657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [d0d87daa0a46e49f73d34545b70dc086dbe8603aa4658df1d31c5027fcc3f5d7] <==
	2025-01-20 12:29:00.704675 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:29:10.704594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:29:20.704621 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:29:30.704519 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:29:40.704491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:29:50.704459 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:30:00.704531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:30:10.704652 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:30:20.704624 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:30:30.704584 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:30:40.704654 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:30:50.704498 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:31:00.704601 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:31:10.705517 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:31:20.704705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:31:30.704549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:31:40.704641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:31:50.704435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:32:00.704599 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:32:10.704553 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:32:20.704461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:32:30.704644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:32:40.704521 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:32:50.704737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-20 12:33:00.704576 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 12:33:08 up  2:15,  0 users,  load average: 2.47, 2.11, 2.43
	Linux old-k8s-version-618033 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2927f7124581213c3255e17104499e85e7b48e03b02827e11b59726f4c2a6a10] <==
	I0120 12:24:37.623240       1 controller.go:401] Syncing nftables rules
	I0120 12:24:47.429736       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:24:47.429780       1 main.go:301] handling current node
	I0120 12:24:57.422978       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:24:57.423015       1 main.go:301] handling current node
	I0120 12:25:07.422745       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:25:07.422783       1 main.go:301] handling current node
	I0120 12:25:17.431957       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:25:17.431991       1 main.go:301] handling current node
	I0120 12:25:27.430290       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:25:27.430329       1 main.go:301] handling current node
	I0120 12:25:37.423219       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:25:37.423256       1 main.go:301] handling current node
	I0120 12:25:47.428106       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:25:47.428141       1 main.go:301] handling current node
	I0120 12:25:57.425681       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:25:57.425716       1 main.go:301] handling current node
	I0120 12:26:07.422447       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:26:07.422482       1 main.go:301] handling current node
	I0120 12:26:17.433666       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:26:17.433710       1 main.go:301] handling current node
	I0120 12:26:27.425680       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:26:27.425904       1 main.go:301] handling current node
	I0120 12:26:37.422753       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:26:37.422881       1 main.go:301] handling current node
	
	
	==> kindnet [a6dfb5f612403e43ba13a223208da331414458d4bb396a2b331cd9d5b285dea3] <==
	I0120 12:30:59.823117       1 main.go:301] handling current node
	I0120 12:31:09.829647       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:31:09.829687       1 main.go:301] handling current node
	I0120 12:31:19.822575       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:31:19.822611       1 main.go:301] handling current node
	I0120 12:31:29.822716       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:31:29.822753       1 main.go:301] handling current node
	I0120 12:31:39.830802       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:31:39.830840       1 main.go:301] handling current node
	I0120 12:31:49.829699       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:31:49.829730       1 main.go:301] handling current node
	I0120 12:31:59.822729       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:31:59.822930       1 main.go:301] handling current node
	I0120 12:32:09.830714       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:32:09.830749       1 main.go:301] handling current node
	I0120 12:32:19.825664       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:32:19.825942       1 main.go:301] handling current node
	I0120 12:32:29.825671       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:32:29.825714       1 main.go:301] handling current node
	I0120 12:32:39.829641       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:32:39.829678       1 main.go:301] handling current node
	I0120 12:32:49.831452       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:32:49.831493       1 main.go:301] handling current node
	I0120 12:32:59.827987       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0120 12:32:59.828027       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5d4812f61b58d763f79f28f90292a32857c5ead871756c266ec07b47b815f95a] <==
	I0120 12:29:43.589190       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 12:29:43.589200       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0120 12:30:19.695497       1 handler_proxy.go:102] no RequestInfo found in the context
	E0120 12:30:19.695809       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0120 12:30:19.695892       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:30:21.203695       1 client.go:360] parsed scheme: "passthrough"
	I0120 12:30:21.203746       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 12:30:21.203755       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 12:31:00.265548       1 client.go:360] parsed scheme: "passthrough"
	I0120 12:31:00.265650       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 12:31:00.265661       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 12:31:41.668682       1 client.go:360] parsed scheme: "passthrough"
	I0120 12:31:41.668718       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 12:31:41.668725       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0120 12:32:18.305295       1 handler_proxy.go:102] no RequestInfo found in the context
	E0120 12:32:18.305381       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0120 12:32:18.305399       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 12:32:18.957974       1 client.go:360] parsed scheme: "passthrough"
	I0120 12:32:18.958282       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 12:32:18.958397       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 12:32:57.392152       1 client.go:360] parsed scheme: "passthrough"
	I0120 12:32:57.392276       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 12:32:57.392306       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
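	The recurring OpenAPI aggregation failures for v1beta1.metrics.k8s.io are the apiserver-side view of the same missing metrics-server: the aggregated APIService has no healthy backing endpoints while the pod is stuck in ImagePullBackOff, so spec retrieval returns 503 and the item is rate-limit requeued. One way to confirm, as a sketch (output shape approximate):

	$ kubectl --context old-k8s-version-618033 get apiservice v1beta1.metrics.k8s.io
	NAME                     SERVICE                      AVAILABLE                  AGE
	v1beta1.metrics.k8s.io   kube-system/metrics-server   False (MissingEndpoints)   ...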
	
	
	==> kube-apiserver [6a26c537f8dc28c2c090c39163fc9cfbb7d6fa98738e7aeb6ac65701f4664f15] <==
	I0120 12:24:15.902543       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0120 12:24:15.902575       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0120 12:24:15.915163       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0120 12:24:15.918638       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0120 12:24:15.918660       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0120 12:24:16.479959       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0120 12:24:16.539241       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0120 12:24:16.706895       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0120 12:24:16.708205       1 controller.go:606] quota admission added evaluator for: endpoints
	I0120 12:24:16.714559       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0120 12:24:17.525479       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0120 12:24:17.998686       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0120 12:24:18.075745       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0120 12:24:26.508453       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0120 12:24:33.510425       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0120 12:24:33.641754       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0120 12:24:50.780452       1 client.go:360] parsed scheme: "passthrough"
	I0120 12:24:50.780635       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 12:24:50.780655       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 12:25:26.182146       1 client.go:360] parsed scheme: "passthrough"
	I0120 12:25:26.182192       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 12:25:26.182202       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0120 12:26:05.047876       1 client.go:360] parsed scheme: "passthrough"
	I0120 12:26:05.047935       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0120 12:26:05.047944       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [8950cdd4d5874b1c165bcdf08ac80ae871f364c4c9461402472e9b68f12ef9f2] <==
	I0120 12:24:33.554355       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0120 12:24:33.554388       1 shared_informer.go:247] Caches are synced for stateful set 
	I0120 12:24:33.554401       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0120 12:24:33.575106       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0120 12:24:33.575228       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0120 12:24:33.577860       1 shared_informer.go:247] Caches are synced for HPA 
	I0120 12:24:33.590628       1 shared_informer.go:247] Caches are synced for attach detach 
	I0120 12:24:33.661233       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-618033" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0120 12:24:33.666411       1 range_allocator.go:373] Set node old-k8s-version-618033 PodCIDR to [10.244.0.0/24]
	I0120 12:24:33.666738       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2s7v5"
	I0120 12:24:33.693686       1 shared_informer.go:247] Caches are synced for resource quota 
	I0120 12:24:33.724490       1 shared_informer.go:247] Caches are synced for resource quota 
	I0120 12:24:33.789536       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vjbl2"
	E0120 12:24:33.817744       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0120 12:24:33.865929       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q2cdx"
	I0120 12:24:33.885071       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0120 12:24:33.956113       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vjzbq"
	I0120 12:24:34.130752       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0120 12:24:34.130776       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0120 12:24:34.185272       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0120 12:24:34.768589       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0120 12:24:34.800381       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-2s7v5"
	I0120 12:24:38.501163       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0120 12:26:39.510664       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0120 12:26:39.649257       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [beff5ecb54dc9e95f374715bddab70d2f72197502772e9b08c30b8ea7b76e5d0] <==
	W0120 12:28:41.679704       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 12:29:09.180254       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 12:29:13.330150       1 request.go:655] Throttling request took 1.048486792s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0120 12:29:14.181461       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 12:29:39.682136       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 12:29:45.831865       1 request.go:655] Throttling request took 1.048361271s, request: GET:https://192.168.85.2:8443/apis/policy/v1beta1?timeout=32s
	W0120 12:29:46.683218       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 12:30:10.184183       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 12:30:18.333687       1 request.go:655] Throttling request took 1.048352548s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 12:30:19.187184       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 12:30:40.696442       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 12:30:50.838304       1 request.go:655] Throttling request took 1.048332598s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 12:30:51.689871       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 12:31:11.198558       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 12:31:23.340305       1 request.go:655] Throttling request took 1.048373362s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 12:31:24.191866       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 12:31:41.700313       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 12:31:55.842278       1 request.go:655] Throttling request took 1.048180465s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0120 12:31:56.694255       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 12:32:12.202399       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 12:32:28.350303       1 request.go:655] Throttling request took 1.048357981s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0120 12:32:29.202179       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0120 12:32:42.704827       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0120 12:33:00.852735       1 request.go:655] Throttling request took 1.048397918s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0120 12:33:01.704319       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [3ae3ce774b5dc37e51f307daa890dcba102548608c9066532eafb1ab59a2b352] <==
	I0120 12:27:21.329773       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0120 12:27:21.329861       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0120 12:27:21.351952       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0120 12:27:21.352064       1 server_others.go:185] Using iptables Proxier.
	I0120 12:27:21.352354       1 server.go:650] Version: v1.20.0
	I0120 12:27:21.353262       1 config.go:315] Starting service config controller
	I0120 12:27:21.353410       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0120 12:27:21.353764       1 config.go:224] Starting endpoint slice config controller
	I0120 12:27:21.354015       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0120 12:27:21.454012       1 shared_informer.go:247] Caches are synced for service config 
	I0120 12:27:21.454220       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [a14330fd1aa84044e9748f80065a2029cbeb7e001226470f9bbbeefb66384f03] <==
	I0120 12:24:36.223331       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0120 12:24:36.223483       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0120 12:24:36.253817       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0120 12:24:36.253925       1 server_others.go:185] Using iptables Proxier.
	I0120 12:24:36.254169       1 server.go:650] Version: v1.20.0
	I0120 12:24:36.258240       1 config.go:315] Starting service config controller
	I0120 12:24:36.258258       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0120 12:24:36.258275       1 config.go:224] Starting endpoint slice config controller
	I0120 12:24:36.258279       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0120 12:24:36.358389       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0120 12:24:36.358473       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [758444c7d1ae55671b729fcf8be942f2fa64b6b2d9753161f0153e6dad487ff0] <==
	I0120 12:24:09.467082       1 serving.go:331] Generated self-signed cert in-memory
	W0120 12:24:15.146004       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 12:24:15.146049       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 12:24:15.146057       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 12:24:15.146062       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 12:24:15.236889       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0120 12:24:15.245116       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0120 12:24:15.245238       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 12:24:15.252877       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0120 12:24:15.251489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 12:24:15.251578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 12:24:15.251647       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 12:24:15.251773       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 12:24:15.251848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 12:24:15.251925       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 12:24:15.257303       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 12:24:15.257827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 12:24:15.259252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 12:24:15.261549       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 12:24:15.261980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 12:24:15.262931       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 12:24:16.104536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 12:24:16.418500       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0120 12:24:18.153029       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [d8f6fdcd0e3fbb033787834d4fee5c0b28c71dc407e1ebb1488741a77aadfe9e] <==
	I0120 12:27:11.768971       1 serving.go:331] Generated self-signed cert in-memory
	W0120 12:27:17.269776       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 12:27:17.269815       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 12:27:17.269828       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 12:27:17.269844       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 12:27:17.375328       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0120 12:27:17.384992       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 12:27:17.385017       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 12:27:17.385326       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0120 12:27:17.485396       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jan 20 12:31:38 old-k8s-version-618033 kubelet[655]: E0120 12:31:38.596565     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: I0120 12:31:40.595804     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
	Jan 20 12:31:40 old-k8s-version-618033 kubelet[655]: E0120 12:31:40.596161     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	Jan 20 12:31:49 old-k8s-version-618033 kubelet[655]: E0120 12:31:49.596536     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: I0120 12:31:53.596067     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
	Jan 20 12:31:53 old-k8s-version-618033 kubelet[655]: E0120 12:31:53.597121     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	Jan 20 12:32:03 old-k8s-version-618033 kubelet[655]: E0120 12:32:03.599363     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: I0120 12:32:08.595889     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
	Jan 20 12:32:08 old-k8s-version-618033 kubelet[655]: E0120 12:32:08.596270     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	Jan 20 12:32:18 old-k8s-version-618033 kubelet[655]: E0120 12:32:18.596585     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: I0120 12:32:21.595945     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
	Jan 20 12:32:21 old-k8s-version-618033 kubelet[655]: E0120 12:32:21.596812     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	Jan 20 12:32:31 old-k8s-version-618033 kubelet[655]: E0120 12:32:31.596590     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: I0120 12:32:34.595780     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
	Jan 20 12:32:34 old-k8s-version-618033 kubelet[655]: E0120 12:32:34.596651     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	Jan 20 12:32:42 old-k8s-version-618033 kubelet[655]: E0120 12:32:42.596598     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: I0120 12:32:46.595881     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
	Jan 20 12:32:46 old-k8s-version-618033 kubelet[655]: E0120 12:32:46.596276     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	Jan 20 12:32:53 old-k8s-version-618033 kubelet[655]: E0120 12:32:53.596605     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 20 12:33:00 old-k8s-version-618033 kubelet[655]: I0120 12:33:00.595784     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: b4c7a3b420fff1ae317b9d49316d91c8f42df0baef2811e040801eb7cdb8492a
	Jan 20 12:33:00 old-k8s-version-618033 kubelet[655]: E0120 12:33:00.596134     655 pod_workers.go:191] Error syncing pod 4faf39bd-9e31-4346-b8d7-f1e0d178bb59 ("dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jmvh6_kubernetes-dashboard(4faf39bd-9e31-4346-b8d7-f1e0d178bb59)"
	Jan 20 12:33:05 old-k8s-version-618033 kubelet[655]: E0120 12:33:05.607462     655 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jan 20 12:33:05 old-k8s-version-618033 kubelet[655]: E0120 12:33:05.607934     655 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jan 20 12:33:05 old-k8s-version-618033 kubelet[655]: E0120 12:33:05.608161     655 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-t7n5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jan 20 12:33:05 old-k8s-version-618033 kubelet[655]: E0120 12:33:05.608344     655 pod_workers.go:191] Error syncing pod 67763d1a-af35-4324-bb02-02c95b8fc186 ("metrics-server-9975d5f86-h8bg5_kube-system(67763d1a-af35-4324-bb02-02c95b8fc186)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [d698a9d5733dfd49dff655bd5da9fb39a10fd091af87c05642acb2c77c3c8eb6] <==
	2025/01/20 12:27:43 Starting overwatch
	2025/01/20 12:27:43 Using namespace: kubernetes-dashboard
	2025/01/20 12:27:43 Using in-cluster config to connect to apiserver
	2025/01/20 12:27:43 Using secret token for csrf signing
	2025/01/20 12:27:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/20 12:27:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/20 12:27:43 Successful initial request to the apiserver, version: v1.20.0
	2025/01/20 12:27:43 Generating JWE encryption key
	2025/01/20 12:27:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/20 12:27:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/20 12:27:43 Initializing JWE encryption key from synchronized object
	2025/01/20 12:27:43 Creating in-cluster Sidecar client
	2025/01/20 12:27:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:27:43 Serving insecurely on HTTP port: 9090
	2025/01/20 12:28:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:28:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:29:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:29:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:30:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:30:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:31:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:31:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:32:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 12:32:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2dbb0b8040357b23879c80ea03d1e37945dcd58a0897ee6b2d364ba98b329b5d] <==
	I0120 12:28:04.716298       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 12:28:04.736296       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 12:28:04.736554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 12:28:22.218353       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 12:28:22.218713       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-618033_8349d959-2a16-481e-8df8-01ea447732a0!
	I0120 12:28:22.221145       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f52819a7-3a4d-4a3b-a66a-9681d171e973", APIVersion:"v1", ResourceVersion:"850", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-618033_8349d959-2a16-481e-8df8-01ea447732a0 became leader
	I0120 12:28:22.319902       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-618033_8349d959-2a16-481e-8df8-01ea447732a0!
	
	
	==> storage-provisioner [fcc769c7e372671469870b0e67d82c86b76967112fcc077ded31b20e117af224] <==
	I0120 12:27:19.476511       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0120 12:27:49.477991       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
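Note on the dump above: the kubelet section shows the two conditions that kept this cluster from settling. metrics-server never starts because its image is pinned to the unresolvable fake.domain registry (the fake registry appears to be deliberate test wiring, so the ImagePullBackOff is expected background noise rather than the root failure), and dashboard-metrics-scraper sits in CrashLoopBackOff. A minimal sketch for inspecting the same state by hand, using the context name from this report (the flags are standard kubectl; nothing minikube-specific is assumed):

	kubectl --context old-k8s-version-618033 -n kube-system get pods -o wide
	kubectl --context old-k8s-version-618033 get events -A --sort-by=.lastTimestamp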
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-618033 -n old-k8s-version-618033
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-618033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-h8bg5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-618033 describe pod metrics-server-9975d5f86-h8bg5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-618033 describe pod metrics-server-9975d5f86-h8bg5: exit status 1 (114.476378ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-h8bg5" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-618033 describe pod metrics-server-9975d5f86-h8bg5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (378.09s)
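Note: the describe step above returned NotFound for metrics-server-9975d5f86-h8bg5 even though the list step had just reported that pod as non-running, which suggests the pod was replaced between the two kubectl calls and the two-step post-mortem raced with pod churn. A label-based describe avoids pinning a pod name that may churn mid-flight; this is a sketch only, and the k8s-app=metrics-server label is assumed from the upstream metrics-server manifest rather than taken from this report:

	kubectl --context old-k8s-version-618033 get po -A --field-selector=status.phase!=Running
	kubectl --context old-k8s-version-618033 -n kube-system describe po -l k8s-app=metrics-server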

TestStartStop/group/no-preload/serial/Pause (7200.083s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-717328 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-717328 --alsologtostderr -v=1: (1.080712368s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-717328 -n no-preload-717328
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-717328 -n no-preload-717328: exit status 2 (501.422734ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-717328 -n no-preload-717328
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-717328 -n no-preload-717328: exit status 2 (467.755328ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-717328 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-717328 --alsologtostderr -v=1: (1.242136774s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-717328 -n no-preload-717328
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-717328 -n no-preload-717328
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.48s)
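For context, the sequence above is the pause round-trip this step verifies: pause the profile, confirm status reports the API server as Paused and the kubelet as Stopped (minikube status exits non-zero when a component is not Running, which is why the harness notes exit status 2 "may be ok"), then unpause and confirm both probes succeed. The step itself passed in 4.48s; the 7200.083s in this section's header is plausibly the run-level timeout being attributed to the still-open test group rather than to this step. A hand-run sketch of the same round-trip against this profile, using only commands that appear in the log:

	out/minikube-linux-arm64 pause -p no-preload-717328
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-717328   # expect: Paused (exit status 2)
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-717328    # expect: Stopped (exit status 2)
	out/minikube-linux-arm64 unpause -p no-preload-717328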
E0120 12:50:11.954386  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:50:13.586010  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:50:41.464157  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:50:52.916359  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:16.100801  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:20.275528  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:20.281907  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:20.293406  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:20.314837  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:20.356245  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:20.437673  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:20.599251  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:20.920879  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:21.562253  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:22.843654  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:25.405086  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:28.436054  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:30.526466  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:36.847943  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:40.767795  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:51:43.803038  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:00.969953  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:01.250037  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:14.840084  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:39.456538  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:39.463194  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:39.474678  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:39.496179  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:39.537639  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:39.619121  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:39.780659  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:40.102360  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:40.744653  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:42.026132  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:42.211814  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:44.587603  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:49.710008  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:51.499809  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:57.600234  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:59.917870  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:52:59.952324  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:53:20.434476  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:53:20.774383  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:53:24.039248  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:53:25.306391  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:01.396118  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:04.134613  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:28.986776  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:28.993168  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:29.004606  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:29.026174  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:29.067610  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:29.149050  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:29.310631  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:29.632481  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:30.274612  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:30.976989  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:31.555982  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:34.117313  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:39.239412  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:45.879891  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:49.480847  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:54:58.682290  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:55:09.962225  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:55:23.317445  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:55:50.924007  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:16.101260  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:20.275287  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:28.436591  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:36.847724  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:56:47.975954  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:00.970098  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:12.845793  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:39.457349  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:57.599704  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:07.159175  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:58:20.774273  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:59:28.987120  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:59:30.977308  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:59:43.850541  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:59:45.879910  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:59:56.687451  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:01:08.947455  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:01:16.101264  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:01:20.274856  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:01:28.436551  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:01:36.847565  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:02:00.970313  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:02:39.165066  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:02:39.456745  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:02:57.600231  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:03:20.774148  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:20.668720  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:28.987101  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:30.977850  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:04:45.878466  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:05:54.044034  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:16.100788  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:20.275256  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:28.436551  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:36.847943  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:07:00.970029  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:07:39.456627  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:07:43.339578  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:07:57.600093  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:08:20.774865  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:09:02.520627  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:09:28.987186  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:09:30.976981  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:09:31.502009  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:09:39.919888  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:09:45.882603  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:10:04.041886  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:10:52.049695  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:11:16.101221  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:11:20.275275  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:11:28.436240  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:11:36.848073  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:12:00.970137  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:12:39.456683  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:12:57.599218  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:13:20.774842  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:14:28.987111  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:14:30.977114  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:14:45.881766  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:16:16.100821  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:16:20.275015  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:16:23.851918  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:16:28.436583  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:16:36.848254  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:17:00.969457  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:17:39.456606  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:17:48.950154  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:17:57.599256  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:18:20.774813  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:19:19.167337  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:19:28.987095  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:19:30.977207  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:19:45.881842  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:21:00.670823  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:21:16.101304  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:21:20.275327  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:21:28.436701  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:21:36.847949  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:22:00.969985  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:22:34.048424  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:22:39.456596  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:22:57.599702  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:23:20.774797  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:24:23.342696  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:24:28.986761  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:24:30.977625  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:24:45.882930  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:25:42.522787  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:26:11.503480  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:26:16.101083  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:26:19.922272  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:26:20.274839  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:26:28.436388  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:26:36.848052  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:26:44.045758  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:27:00.969907  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:27:32.051854  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:27:39.456757  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:27:57.599357  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:28:20.774063  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:29:28.986663  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:29:30.977199  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:29:45.881896  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:31:16.101253  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:31:20.275307  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:31:28.436597  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:31:36.847762  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:32:00.970364  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:32:39.456294  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:32:57.599471  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/calico-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:33:03.853733  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:33:20.774832  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:34:28.951929  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:34:28.987425  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:34:30.977178  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/custom-flannel-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:34:45.881793  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/no-preload-717328/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:35:59.169377  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:36:16.101111  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/kindnet-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:36:20.274583  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/enable-default-cni-586737/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:36:28.436072  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:36:36.847784  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:37:00.970365  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
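
Every cert_rotation.go line above is the same failure repeating on a timer: client-go's certificate-reload worker keeps re-reading the client certificate named by a cached kubeconfig context, and once a test profile (bridge-586737, flannel-586737, and so on) has been deleted, each reload fails with "no such file or directory". The sketch below is a minimal, stdlib-only illustration of that reload pattern — it is not minikube or client-go source, and the profile path is copied from the log purely as an example.

package main

import (
	"crypto/tls"
	"log"
	"time"
)

func main() {
	// Paths taken from the log for illustration; the profile has been
	// deleted, so these files no longer exist on disk.
	certFile := "/home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.crt"
	keyFile := "/home/jenkins/minikube-integration/20151-446459/.minikube/profiles/bridge-586737/client.key"

	ticker := time.NewTicker(30 * time.Second) // client-go re-checks on a similar cadence
	defer ticker.Stop()
	for range ticker.C { // runs until the process exits, like the rotation worker
		// Once the profile directory is gone, LoadX509KeyPair returns an
		// *os.PathError, which surfaces as the "key failed with : open ...:
		// no such file or directory" lines repeated above.
		if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
			log.Printf("cert_rotation: key failed with : %v", err)
		}
	}
}
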
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (1h15m31s)
		TestNetworkPlugins/group/auto (58m0s)
		TestNetworkPlugins/group/auto/Start (58m0s)

goroutine 3903 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x30c
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

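"panic: test timed out after 2h0m0s" is the testing package's own watchdog: go test arms an alarm for the -timeout budget (2h here), and when it fires — testing.(*M).startAlarm, the running goroutine above — the binary panics, lists the still-running tests, and dumps every goroutine. A hedged reproduction with a much smaller budget: any test that never returns produces the same shape when run with `go test -timeout 5s`.

// demo_test.go — illustrative reproduction, not part of the minikube suite.
package demo

import "testing"

// TestHangsForever never returns; the -timeout watchdog panics the test
// binary with "test timed out after 5s", prints "running tests:", and dumps
// all goroutines, exactly the shape of the report above.
func TestHangsForever(t *testing.T) {
	select {} // blocks forever: no case can ever become ready
}
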
goroutine 1 [chan receive, 71 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x434
testing.tRunner(0x4000028340, 0x4000a21bb8)
	/usr/local/go/src/testing/testing.go:1696 +0x120
testing.runTests(0x4000830018, {0x4cf0100, 0x2b, 0x2b}, {0x4000a21d08?, 0x11fa04?, 0x4d16520?})
	/usr/local/go/src/testing/testing.go:2166 +0x3ac
testing.(*M).Run(0x40006d34a0)
	/usr/local/go/src/testing/testing.go:2034 +0x588
k8s.io/minikube/test/integration.TestMain(0x40006d34a0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x84
main.main()
	_testmain.go:131 +0x98

goroutine 3596 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x4001991f40, 0x4001455f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0xa8?, 0x4001991f40, 0x4001991f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x400187c000?, 0x40000e23c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x4001559340?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3592
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 3595 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001caac90, 0x1c)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4001caac80)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001caacc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001e0d300, {0x3383960, 0x4001e40870}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001e0d300, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3592
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

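Most of the parked goroutines in this dump are the same cert-rotation machinery in two states: a worker blocked in workqueue.(*Typed).Get (the sync.Cond.Wait frames) and a poller blocked in wait.waitForWithContext (the select frames), both spawned from (*dynamicClientCert).Run via wait.BackoffUntil. The following is a simplified, stdlib-only mimic of that wait.Until-style loop — an illustrative re-implementation, not the apimachinery code — showing why an idle worker sits in a select for minutes at a time.

package main

import (
	"fmt"
	"time"
)

// until is a simplified stand-in for k8s.io/apimachinery's wait.Until:
// run f, then sleep period, until stopCh closes. An idle worker spends its
// life in the second select, which is why these goroutines are reported as
// parked in "select" or "sync.Cond.Wait" for minutes.
func until(f func(), period time.Duration, stopCh <-chan struct{}) {
	for {
		select {
		case <-stopCh:
			return
		default:
		}
		f()
		select {
		case <-stopCh:
			return
		case <-time.After(period):
		}
	}
}

func main() {
	stop := make(chan struct{})
	go until(func() { fmt.Println("processNextWorkItem") }, time.Second, stop)
	time.Sleep(3 * time.Second)
	close(stop)
	time.Sleep(100 * time.Millisecond) // let the worker observe the close
}
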
goroutine 3338 [chan receive, 56 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4001e3d680, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3336
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 3469 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3465
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 864 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x40006cf380, 0x4001552310)
	/usr/local/go/src/os/exec/exec.go:798 +0x2c8
created by os/exec.(*Cmd).Start in goroutine 863
	/usr/local/go/src/os/exec/exec.go:759 +0x78c

goroutine 2729 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2725
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 3347 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3346
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 2944 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4000a31590, 0x1f)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4000a31580)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000a315c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x400147d0e0, {0x3383960, 0x4001835740}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400147d0e0, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2973
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 112 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4000a306d0, 0x2d)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4000a306c0)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4000a30700)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4000986000, {0x3383960, 0x4000876810}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000986000, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 95
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 113 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x400152df40, 0x400152df88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0x14?, 0x400152df40, 0x400152df88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x33844e0?, 0x40007c25f0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x4000a2f080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 95
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 94 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 93
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 95 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4000a30700, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 93
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 114 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 113
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 3144 [select, 58 minutes]:
os/exec.(*Cmd).watchCtx(0x4001516600, 0x4001f92a10)
	/usr/local/go/src/os/exec/exec.go:773 +0x7c
created by os/exec.(*Cmd).Start in goroutine 3141
	/usr/local/go/src/os/exec/exec.go:759 +0x78c

goroutine 883 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 882
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 1301 [select, 110 minutes]:
net/http.(*persistConn).readLoop(0x4001e4d200)
	/usr/local/go/src/net/http/transport.go:2325 +0xb24
created by net/http.(*Transport).dialConn in goroutine 1299
	/usr/local/go/src/net/http/transport.go:1874 +0x1050

goroutine 2733 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001e3d1d0, 0x21)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4001e3d1c0)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001e3d200)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001439100, {0x3383960, 0x4001e9ce10}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001439100, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2730
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 2681 [chan receive, 71 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4001caa640, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2659
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 707 [IO wait, 112 minutes]:
internal/poll.runtime_pollWait(0xffff565b58d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001d98000?, 0x10?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001d98000)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x24c
net.(*netFD).accept(0x4001d98000)
	/usr/local/go/src/net/fd_unix.go:172 +0x28
net.(*TCPListener).accept(0x4001caa1c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x28
net.(*TCPListener).Accept(0x4001caa1c0)
	/usr/local/go/src/net/tcpsock.go:372 +0x2c
net/http.(*Server).Serve(0x40016fc4b0, {0x33af830, 0x4001caa1c0})
	/usr/local/go/src/net/http/server.go:3330 +0x294
net/http.(*Server).ListenAndServe(0x40016fc4b0)
	/usr/local/go/src/net/http/server.go:3259 +0x84
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x40016c2d00?, 0x40016c2d00)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2213 +0x20
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 705
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2212 +0x11c

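Goroutine 707 is different: it is the integration harness's HTTP proxy (startHTTPProxy in functional_test.go) blocked in Accept, which is normal for a background http.Server and accounts for the 112-minute IO wait. A minimal sketch of that pattern follows — names and details are illustrative, not minikube's actual helper.

package main

import (
	"log"
	"net/http"
)

// startHTTPProxy is an illustrative stand-in for the harness helper of the
// same name: it starts an http.Server in a background goroutine, which then
// blocks accepting connections for the rest of the run — hence a goroutine
// parked in "[IO wait]" for the lifetime of the test binary is expected.
func startHTTPProxy() *http.Server {
	srv := &http.Server{Addr: "127.0.0.1:0"} // :0 picks a free port
	go func() {
		// ListenAndServe blocks in Accept until Shutdown/Close is called.
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Printf("proxy exited: %v", err)
		}
	}()
	return srv
}

func main() {
	_ = startHTTPProxy()
	select {} // keep the process alive, like a long-running test binary
}
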
goroutine 3473 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3456
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 884 [chan receive, 110 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4001cab600, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 882
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 2095 [chan receive, 75 minutes]:
testing.(*T).Run(0x40015e21a0, {0x26a3cc1?, 0x62ccefccfc8?}, 0x4001994828)
	/usr/local/go/src/testing/testing.go:1751 +0x328
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40015e21a0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xcc
testing.tRunner(0x40015e21a0, 0x3046ef8)
	/usr/local/go/src/testing/testing.go:1690 +0xe4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x314

goroutine 3225 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001caa6d0, 0x1e)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4001caa6c0)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001caa700)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001df1710, {0x3383960, 0x40018de180}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001df1710, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3222
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 3784 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001e3c610, 0x1b)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4001e3c600)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001e3c640)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x400155e470, {0x3383960, 0x400098bc50}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400155e470, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3781
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 1025 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x40016ef080, 0x40016d9d50)
	/usr/local/go/src/os/exec/exec.go:798 +0x2c8
created by os/exec.(*Cmd).Start in goroutine 1024
	/usr/local/go/src/os/exec/exec.go:759 +0x78c

goroutine 2977 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x400198cf40, 0x4001ad8f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0x58?, 0x400198cf40, 0x400198cf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x400013f180?, 0x400013f180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x40007cbb00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2973
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 875 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x4000828f40, 0x40014aff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0x70?, 0x4000828f40, 0x4000828f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x40007cb980?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 884
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 874 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0x4001cab5d0, 0x2b)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4001cab5c0)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001cab600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x400144d720, {0x3383960, 0x400098b1a0}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400144d720, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 884
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 3226 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x400082ff40, 0x400166ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0x0?, 0x400082ff40, 0x400082ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x4001f62900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3222
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 2663 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001caa610, 0x21)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4001caa600)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001caa640)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001f84470, {0x3383960, 0x4001b14330}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001f84470, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2681
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 1302 [select, 110 minutes]:
net/http.(*persistConn).writeLoop(0x4001e4d200)
	/usr/local/go/src/net/http/transport.go:2519 +0x9c
created by net/http.(*Transport).dialConn in goroutine 1299
	/usr/local/go/src/net/http/transport.go:1875 +0x1098

goroutine 3663 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0x4001c765d0, 0x1b)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4001c765c0)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001c76600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40015f2b80, {0x3383960, 0x4001c46240}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015f2b80, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3660
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 2680 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2659
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 3221 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3220
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 2664 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x400082c740, 0x400166cf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0x14?, 0x400082c740, 0x400082c788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x40007fcd90?, 0x400006ec80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x4001e34000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2681
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 1257 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x4001ae7200, 0x4001dbfea0)
	/usr/local/go/src/os/exec/exec.go:798 +0x2c8
created by os/exec.(*Cmd).Start in goroutine 817
	/usr/local/go/src/os/exec/exec.go:759 +0x78c

goroutine 3597 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3596
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 3786 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3785
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 876 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 875
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 3141 [syscall, 58 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x14, 0x4001ad7c30, 0x4, 0x4000123170, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x2c
os.(*Process).pidfdWait(0x4002024180)
	/usr/local/go/src/os/pidfd_linux.go:110 +0x1d8
os.(*Process).wait(0x3?)
	/usr/local/go/src/os/exec_unix.go:27 +0x2c
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0x4001516600)
	/usr/local/go/src/os/exec/exec.go:906 +0x38
os/exec.(*Cmd).Run(0x4001516600)
	/usr/local/go/src/os/exec/exec.go:610 +0x38
k8s.io/minikube/test/integration.Run(0x40007eb040, 0x4001516600)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x184
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x40007eb040)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x54
testing.tRunner(0x40007eb040, 0x4001f0a4e0)
	/usr/local/go/src/testing/testing.go:1690 +0xe4
created by testing.(*T).Run in goroutine 2415
	/usr/local/go/src/testing/testing.go:1743 +0x314
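
Goroutine 3141 has been in os.(*Process).Wait for 58 minutes because the integration helper's Run wraps exec.Cmd.Run, which blocks until the child process exits. A hedged sketch of bounding such a call with a context deadline (illustrative only, not what helpers_test.go actually does):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative guard: the deadline kills the child, so Run cannot stay
	// parked in os.(*Process).Wait indefinitely the way goroutine 3141 does.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "60")
	err := cmd.Run()               // blocks in Process.Wait until exit or kill
	fmt.Println("run ended:", err) // "signal: killed" once the deadline fires
}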

goroutine 3222 [chan receive, 56 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4001caa700, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3220
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 3337 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3336
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 2414 [chan receive, 75 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x434
testing.tRunner(0x40015e3d40, 0x4001994828)
	/usr/local/go/src/testing/testing.go:1696 +0x120
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1743 +0x314

goroutine 2665 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2664
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 3592 [chan receive, 52 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4001caacc0, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3587
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 3455 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001c77650, 0x1c)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4001c77640)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001c77680)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001cdfc40, {0x3383960, 0x4001d2af90}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001cdfc40, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3470
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 2973 [chan receive, 64 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4000a315c0, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2968
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 3781 [chan receive, 48 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4001e3c640, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3815
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 3697 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3664
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 3780 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3815
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 3142 [IO wait, 58 minutes]:
internal/poll.runtime_pollWait(0xffff565b5018, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4000a42ba0?, 0x400073abda?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4000a42ba0, {0x400073abda, 0x426, 0x426})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1fc
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400203a208, {0x400073abda?, 0x400082c578?, 0x20d?})
	/usr/local/go/src/os/file.go:124 +0x70
bytes.(*Buffer).ReadFrom(0x4001f0a5a0, {0x3381ee0, 0x4001c78358})
	/usr/local/go/src/bytes/buffer.go:211 +0x90
io.copyBuffer({0x3382060, 0x4001f0a5a0}, {0x3381ee0, 0x4001c78358}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400203a208?, {0x3382060, 0x4001f0a5a0})
	/usr/local/go/src/os/file.go:275 +0x58
os.(*File).WriteTo(0x400203a208, {0x3382060, 0x4001f0a5a0})
	/usr/local/go/src/os/file.go:253 +0xa0
io.copyBuffer({0x3382060, 0x4001f0a5a0}, {0x3381f60, 0x400203a208}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x44
os/exec.(*Cmd).Start.func2(0x40007eb040?)
	/usr/local/go/src/os/exec/exec.go:733 +0x34
created by os/exec.(*Cmd).Start in goroutine 3141
	/usr/local/go/src/os/exec/exec.go:732 +0x7c0
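
Goroutines 3142 and 3143 (below) are the stdout/stderr copiers that os/exec starts whenever Cmd.Stdout or Cmd.Stderr is not an *os.File: each sits in "IO wait" on the read end of a pipe until the child closes its end. A minimal reproduction of the same frames:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var out bytes.Buffer
	cmd := exec.Command("echo", "hello")
	// A non-*os.File writer makes Start spawn a pipe-copying goroutine
	// (os/exec's writerDescriptor), the same stack as goroutines 3142/3143.
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Print(out.String()) // "hello\n"
}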

goroutine 2734 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x4001541f40, 0x400166ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0x0?, 0x4001541f40, 0x4001541f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x4000a2ed80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2730
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 2730 [chan receive, 71 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4001e3d200, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2725
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 3143 [IO wait, 58 minutes]:
internal/poll.runtime_pollWait(0xffff565b5e50, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4000a42c60?, 0x400164c1cd?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4000a42c60, {0x400164c1cd, 0x7e33, 0x7e33})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1fc
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400203a220, {0x400164c1cd?, 0x400082bd78?, 0xfeab?})
	/usr/local/go/src/os/file.go:124 +0x70
bytes.(*Buffer).ReadFrom(0x4001f0a5d0, {0x3381ee0, 0x4001c78368})
	/usr/local/go/src/bytes/buffer.go:211 +0x90
io.copyBuffer({0x3382060, 0x4001f0a5d0}, {0x3381ee0, 0x4001c78368}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400203a220?, {0x3382060, 0x4001f0a5d0})
	/usr/local/go/src/os/file.go:275 +0x58
os.(*File).WriteTo(0x400203a220, {0x3382060, 0x4001f0a5d0})
	/usr/local/go/src/os/file.go:253 +0xa0
io.copyBuffer({0x3382060, 0x4001f0a5d0}, {0x3381f60, 0x400203a220}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x44
os/exec.(*Cmd).Start.func2(0x4001e34d80?)
	/usr/local/go/src/os/exec/exec.go:733 +0x34
created by os/exec.(*Cmd).Start in goroutine 3141
	/usr/local/go/src/os/exec/exec.go:732 +0x7c0

goroutine 2735 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2734
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 1092 [chan send, 110 minutes]:
os/exec.(*Cmd).watchCtx(0x40019d0300, 0x400187bea0)
	/usr/local/go/src/os/exec/exec.go:798 +0x2c8
created by os/exec.(*Cmd).Start in goroutine 1091
	/usr/local/go/src/os/exec/exec.go:759 +0x78c

goroutine 3345 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0x4001e3d650, 0x1c)
	/usr/local/go/src/runtime/sema.go:587 +0x154
sync.(*Cond).Wait(0x4001e3d640)
	/usr/local/go/src/sync/cond.go:71 +0xcc
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x33d8780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8c
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001e3d680)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x40
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x40
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x400144cfc0, {0x3383960, 0x40013ee2d0}, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0x90
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400144cfc0, 0x3b9aca00, 0x0, 0x1, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x80
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3338
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x198

goroutine 3346 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x400198c740, 0x4001ad9f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0x20?, 0x400198c740, 0x400198c788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x40006081e0?, 0x40006081e0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x4001f69380?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3338
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 3785 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x40014a4f40, 0x4001531f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0xa8?, 0x40014a4f40, 0x40014a4f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x400188a000?, 0x40005f2780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x40017b7200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3781
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 2972 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2968
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 3659 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3658
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 3591 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x33b2a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x258
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3587
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x200

goroutine 2415 [chan receive, 58 minutes]:
testing.(*T).Run(0x40000284e0, {0x26a3cc6?, 0x3379db8?}, 0x4001f0a4e0)
	/usr/local/go/src/testing/testing.go:1751 +0x328
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40000284e0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x55c
testing.tRunner(0x40000284e0, 0x400067e300)
	/usr/local/go/src/testing/testing.go:1690 +0xe4
created by testing.(*T).Run in goroutine 2414
	/usr/local/go/src/testing/testing.go:1743 +0x314
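
Goroutines 2414 and 2415 show the testing package's nesting: each t.Run spawns a fresh goroutine for the subtest and blocks the caller on a channel receive until it finishes, which is why the parents report "chan receive" for as long as the leaf (goroutine 3141) runs its subprocess. A compilable sketch of that shape (save as a _test.go file; names are illustrative):

package integration_test

import "testing"

// Each t.Run starts the subtest in its own goroutine and parks the caller
// in a channel receive until it completes, matching goroutines 2414/2415.
func TestNestedRunShape(t *testing.T) {
	t.Run("group", func(t *testing.T) { // parent parks in "chan receive"
		t.Run("leaf", func(t *testing.T) {
			t.Log("leaf subtest runs in its own goroutine")
		})
	})
}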

goroutine 3227 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3226
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 3664 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x40014a5f40, 0x400154ff88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0x0?, 0x40014a5f40, 0x40014a5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x4000a42c00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3660
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 3456 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x33bc790, 0x40000800e0}, 0x400082c740, 0x4001459f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xb0
k8s.io/apimachinery/pkg/util/wait.poll({0x33bc790, 0x40000800e0}, 0x0?, 0x400082c740, 0x400082c788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x90
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x33bc790?, 0x40000800e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x44
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x8ddc4?, 0x4000a431a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x40
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3470
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x23c

goroutine 2978 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x150
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2977
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xc0

goroutine 3470 [chan receive, 54 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4001c77680, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3465
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c

goroutine 3660 [chan receive, 50 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0x4001c76600, 0x40000800e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x248
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3658
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x48c


Test pass (253/282)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.0/json-events 4.8
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.08
18 TestDownloadOnly/v1.32.0/DeleteAll 0.23
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 235.49
29 TestAddons/serial/Volcano 40.99
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.89
35 TestAddons/parallel/Registry 16.54
36 TestAddons/parallel/Ingress 19.58
37 TestAddons/parallel/InspektorGadget 11.87
38 TestAddons/parallel/MetricsServer 6.91
40 TestAddons/parallel/CSI 59.83
41 TestAddons/parallel/Headlamp 17.08
42 TestAddons/parallel/CloudSpanner 5.66
43 TestAddons/parallel/LocalPath 9.61
44 TestAddons/parallel/NvidiaDevicePlugin 5.64
45 TestAddons/parallel/Yakd 10.94
47 TestAddons/StoppedEnableDisable 12.26
48 TestCertOptions 33.05
49 TestCertExpiration 226.55
51 TestForceSystemdFlag 42.57
52 TestForceSystemdEnv 43.97
53 TestDockerEnvContainerd 46.18
58 TestErrorSpam/setup 29.94
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.25
61 TestErrorSpam/pause 1.77
62 TestErrorSpam/unpause 1.73
63 TestErrorSpam/stop 1.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 77.7
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.25
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
75 TestFunctional/serial/CacheCmd/cache/add_local 1.33
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 43.78
84 TestFunctional/serial/ComponentHealth 0.13
85 TestFunctional/serial/LogsCmd 1.71
86 TestFunctional/serial/LogsFileCmd 1.78
87 TestFunctional/serial/InvalidService 4.37
89 TestFunctional/parallel/ConfigCmd 0.52
90 TestFunctional/parallel/DashboardCmd 9.28
91 TestFunctional/parallel/DryRun 0.65
92 TestFunctional/parallel/InternationalLanguage 0.28
93 TestFunctional/parallel/StatusCmd 1.31
97 TestFunctional/parallel/ServiceCmdConnect 9.62
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 27.65
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.16
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 1.65
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.87
113 TestFunctional/parallel/License 0.27
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
116 TestFunctional/parallel/Version/short 0.09
117 TestFunctional/parallel/Version/components 1.41
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
125 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
126 TestFunctional/parallel/ImageCommands/Setup 0.66
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.5
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.14
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.35
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/MountCmd/any-port 8.18
144 TestFunctional/parallel/MountCmd/specific-port 2.21
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.36
146 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
147 TestFunctional/parallel/ServiceCmd/List 0.6
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.71
150 TestFunctional/parallel/ProfileCmd/profile_list 0.67
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
153 TestFunctional/parallel/ServiceCmd/Format 0.52
154 TestFunctional/parallel/ServiceCmd/URL 0.52
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 135.38
162 TestMultiControlPlane/serial/DeployApp 33.46
163 TestMultiControlPlane/serial/PingHostFromPods 1.67
164 TestMultiControlPlane/serial/AddWorkerNode 23.15
165 TestMultiControlPlane/serial/NodeLabels 0.11
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.99
167 TestMultiControlPlane/serial/CopyFile 19.65
168 TestMultiControlPlane/serial/StopSecondaryNode 12.79
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
170 TestMultiControlPlane/serial/RestartSecondaryNode 18.59
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 147.18
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.49
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.99
175 TestMultiControlPlane/serial/StopCluster 35.87
176 TestMultiControlPlane/serial/RestartCluster 64.15
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
178 TestMultiControlPlane/serial/AddSecondaryNode 44.32
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
183 TestJSONOutput/start/Command 79.1
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.67
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.88
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.26
208 TestKicCustomNetwork/create_custom_network 39.49
209 TestKicCustomNetwork/use_default_bridge_network 33.02
210 TestKicExistingNetwork 35.31
211 TestKicCustomSubnet 34.16
212 TestKicStaticIP 33.27
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 66.14
217 TestMountStart/serial/StartWithMountFirst 8.76
218 TestMountStart/serial/VerifyMountFirst 0.25
219 TestMountStart/serial/StartWithMountSecond 6.42
220 TestMountStart/serial/VerifyMountSecond 0.27
221 TestMountStart/serial/DeleteFirst 1.62
222 TestMountStart/serial/VerifyMountPostDelete 0.26
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.12
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 103.2
229 TestMultiNode/serial/DeployApp2Nodes 19.88
230 TestMultiNode/serial/PingHostFrom2Pods 1.03
231 TestMultiNode/serial/AddNode 18.61
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.68
234 TestMultiNode/serial/CopyFile 10.24
235 TestMultiNode/serial/StopNode 2.25
236 TestMultiNode/serial/StartAfterStop 9.34
237 TestMultiNode/serial/RestartKeepsNodes 85.68
238 TestMultiNode/serial/DeleteNode 5.25
239 TestMultiNode/serial/StopMultiNode 23.93
240 TestMultiNode/serial/RestartMultiNode 49.59
241 TestMultiNode/serial/ValidateNameConflict 35.49
246 TestPreload 115.83
248 TestScheduledStopUnix 105.74
251 TestInsufficientStorage 10.91
252 TestRunningBinaryUpgrade 87.13
254 TestKubernetesUpgrade 346.54
255 TestMissingContainerUpgrade 166.78
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 39.32
259 TestNoKubernetes/serial/StartWithStopK8s 17.15
260 TestNoKubernetes/serial/Start 6.39
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
262 TestNoKubernetes/serial/ProfileList 0.96
263 TestNoKubernetes/serial/Stop 1.22
264 TestNoKubernetes/serial/StartNoArgs 7.47
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
266 TestStoppedBinaryUpgrade/Setup 0.67
267 TestStoppedBinaryUpgrade/Upgrade 101.64
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
277 TestPause/serial/Start 65.15
278 TestPause/serial/SecondStartNoReconfiguration 7.93
279 TestPause/serial/Pause 0.95
280 TestPause/serial/VerifyStatus 0.41
281 TestPause/serial/Unpause 0.88
282 TestPause/serial/PauseAgain 1.05
283 TestPause/serial/DeletePaused 2.85
284 TestPause/serial/VerifyDeletedResources 0.46
297 TestStartStop/group/old-k8s-version/serial/FirstStart 177.28
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.9
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.72
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
302 TestStartStop/group/old-k8s-version/serial/Stop 12.33
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.55
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.75
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.3
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 276.98
310 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
312 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
313 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.17
315 TestStartStop/group/embed-certs/serial/FirstStart 54.86
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/embed-certs/serial/DeployApp 9.42
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
320 TestStartStop/group/old-k8s-version/serial/Pause 3.67
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.4
322 TestStartStop/group/embed-certs/serial/Stop 12.29
324 TestStartStop/group/no-preload/serial/FirstStart 76.86
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
326 TestStartStop/group/embed-certs/serial/SecondStart 270.82
327 TestStartStop/group/no-preload/serial/DeployApp 9.38
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
329 TestStartStop/group/no-preload/serial/Stop 12
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
331 TestStartStop/group/no-preload/serial/SecondStart 269.13
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/embed-certs/serial/Pause 3.2
337 TestStartStop/group/newest-cni/serial/FirstStart 35.8
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
340 TestStartStop/group/newest-cni/serial/Stop 1.27
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
342 TestStartStop/group/newest-cni/serial/SecondStart 16.64
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
346 TestStartStop/group/newest-cni/serial/Pause 3.65
348 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
349 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
350 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
x
+
TestDownloadOnly/v1.20.0/json-events (6.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-359921 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-359921 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.568517985s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.57s)

x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 11:37:34.024523  451835 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 11:37:34.024617  451835 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
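
preload-exists passes in zero time because it only confirms that the tarball cached by the json-events run above is still on disk. A hedged sketch of such an existence check (path layout taken from the "Found local preload" log line; this is not minikube's preload.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path mirrors the log line above; substitute your MINIKUBE_HOME as needed.
	p := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("preload exists:", p)
}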

x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-359921
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-359921: exit status 85 (98.585061ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-359921 | jenkins | v1.35.0 | 20 Jan 25 11:37 UTC |          |
	|         | -p download-only-359921        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 11:37:27
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 11:37:27.506300  451840 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:37:27.506486  451840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:37:27.506518  451840 out.go:358] Setting ErrFile to fd 2...
	I0120 11:37:27.506541  451840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:37:27.506824  451840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	W0120 11:37:27.506990  451840 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20151-446459/.minikube/config/config.json: open /home/jenkins/minikube-integration/20151-446459/.minikube/config/config.json: no such file or directory
	I0120 11:37:27.507428  451840 out.go:352] Setting JSON to true
	I0120 11:37:27.508337  451840 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4793,"bootTime":1737368255,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 11:37:27.508437  451840 start.go:139] virtualization:  
	I0120 11:37:27.512773  451840 out.go:97] [download-only-359921] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0120 11:37:27.512969  451840 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 11:37:27.513087  451840 notify.go:220] Checking for updates...
	I0120 11:37:27.516589  451840 out.go:169] MINIKUBE_LOCATION=20151
	I0120 11:37:27.519769  451840 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:37:27.522607  451840 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	I0120 11:37:27.525501  451840 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	I0120 11:37:27.528412  451840 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0120 11:37:27.534116  451840 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 11:37:27.534402  451840 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:37:27.559214  451840 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 11:37:27.559342  451840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 11:37:27.610037  451840 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 11:37:27.60171542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 11:37:27.610144  451840 docker.go:318] overlay module found
	I0120 11:37:27.613058  451840 out.go:97] Using the docker driver based on user configuration
	I0120 11:37:27.613085  451840 start.go:297] selected driver: docker
	I0120 11:37:27.613091  451840 start.go:901] validating driver "docker" against <nil>
	I0120 11:37:27.613195  451840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 11:37:27.662917  451840 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 11:37:27.654464618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 11:37:27.663129  451840 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 11:37:27.663419  451840 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0120 11:37:27.663580  451840 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 11:37:27.666652  451840 out.go:169] Using Docker driver with root privileges
	I0120 11:37:27.669391  451840 cni.go:84] Creating CNI manager for ""
	I0120 11:37:27.669447  451840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 11:37:27.669460  451840 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 11:37:27.669541  451840 start.go:340] cluster config:
	{Name:download-only-359921 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-359921 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:37:27.672502  451840 out.go:97] Starting "download-only-359921" primary control-plane node in "download-only-359921" cluster
	I0120 11:37:27.672522  451840 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 11:37:27.675372  451840 out.go:97] Pulling base image v0.0.46 ...
	I0120 11:37:27.675397  451840 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 11:37:27.675543  451840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 11:37:27.690705  451840 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 11:37:27.691401  451840 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0120 11:37:27.691507  451840 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 11:37:27.737167  451840 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0120 11:37:27.737203  451840 cache.go:56] Caching tarball of preloaded images
	I0120 11:37:27.737386  451840 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 11:37:27.740655  451840 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 11:37:27.740677  451840 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 11:37:27.826383  451840 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0120 11:37:32.064475  451840 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 11:37:32.064673  451840 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0120 11:37:32.332982  451840 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	
	
	* The control-plane node download-only-359921 host does not exist
	  To start a cluster, run: "minikube start -p download-only-359921"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-359921
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.0/json-events (4.8s)

=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-842103 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-842103 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.799603569s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (4.80s)

TestDownloadOnly/v1.32.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 11:37:39.293840  451835 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 11:37:39.293880  451835 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20151-446459/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

TestDownloadOnly/v1.32.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-842103
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-842103: exit status 85 (81.778679ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-359921 | jenkins | v1.35.0 | 20 Jan 25 11:37 UTC |                     |
	|         | -p download-only-359921        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 11:37 UTC | 20 Jan 25 11:37 UTC |
	| delete  | -p download-only-359921        | download-only-359921 | jenkins | v1.35.0 | 20 Jan 25 11:37 UTC | 20 Jan 25 11:37 UTC |
	| start   | -o=json --download-only        | download-only-842103 | jenkins | v1.35.0 | 20 Jan 25 11:37 UTC |                     |
	|         | -p download-only-842103        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 11:37:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 11:37:34.542793  452048 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:37:34.542929  452048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:37:34.542947  452048 out.go:358] Setting ErrFile to fd 2...
	I0120 11:37:34.542952  452048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:37:34.543287  452048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	I0120 11:37:34.544233  452048 out.go:352] Setting JSON to true
	I0120 11:37:34.545104  452048 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4800,"bootTime":1737368255,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 11:37:34.545181  452048 start.go:139] virtualization:  
	I0120 11:37:34.548735  452048 out.go:97] [download-only-842103] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 11:37:34.549031  452048 notify.go:220] Checking for updates...
	I0120 11:37:34.552077  452048 out.go:169] MINIKUBE_LOCATION=20151
	I0120 11:37:34.555108  452048 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:37:34.558086  452048 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	I0120 11:37:34.560925  452048 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	I0120 11:37:34.563756  452048 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0120 11:37:34.569423  452048 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 11:37:34.569817  452048 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:37:34.601548  452048 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 11:37:34.601791  452048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 11:37:34.656849  452048 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 11:37:34.64783285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 11:37:34.656962  452048 docker.go:318] overlay module found
	I0120 11:37:34.659902  452048 out.go:97] Using the docker driver based on user configuration
	I0120 11:37:34.659945  452048 start.go:297] selected driver: docker
	I0120 11:37:34.659957  452048 start.go:901] validating driver "docker" against <nil>
	I0120 11:37:34.660068  452048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 11:37:34.709952  452048 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 11:37:34.701334975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 11:37:34.710167  452048 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 11:37:34.710494  452048 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0120 11:37:34.710646  452048 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 11:37:34.713761  452048 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-842103 host does not exist
	  To start a cluster, run: "minikube start -p download-only-842103"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.08s)

TestDownloadOnly/v1.32.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.23s)

TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-842103
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I0120 11:37:40.646225  451835 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-808490 --alsologtostderr --binary-mirror http://127.0.0.1:42941 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-808490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-808490
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-272194
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-272194: exit status 85 (79.511472ms)

-- stdout --
	* Profile "addons-272194" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-272194"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-272194
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-272194: exit status 85 (74.163536ms)

-- stdout --
	* Profile "addons-272194" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-272194"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (235.49s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-272194 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-272194 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m55.488332327s)
--- PASS: TestAddons/Setup (235.49s)

TestAddons/serial/Volcano (40.99s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 64.99643ms
addons_test.go:823: volcano-controller stabilized in 66.096076ms
addons_test.go:815: volcano-admission stabilized in 66.804591ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-bh4hn" [23cc17b8-c4b2-469c-9d2a-ab09e9d8a092] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003420202s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-9ch5l" [0085503a-2dd4-4e1a-96bb-1d333a22cb53] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004045668s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-p7fsr" [52471c9e-f546-467d-a8f5-2844e1600669] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00361502s
addons_test.go:842: (dbg) Run:  kubectl --context addons-272194 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-272194 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-272194 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [48c8401b-939a-42ba-b554-c670a96f8b2a] Pending
helpers_test.go:344: "test-job-nginx-0" [48c8401b-939a-42ba-b554-c670a96f8b2a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [48c8401b-939a-42ba-b554-c670a96f8b2a] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003304568s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-272194 addons disable volcano --alsologtostderr -v=1: (11.305531229s)
--- PASS: TestAddons/serial/Volcano (40.99s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-272194 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-272194 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.89s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-272194 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-272194 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [754ea849-9877-473b-a2fb-04a9163b7d65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [754ea849-9877-473b-a2fb-04a9163b7d65] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004014388s
addons_test.go:633: (dbg) Run:  kubectl --context addons-272194 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-272194 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-272194 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-272194 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.89s)

TestAddons/parallel/Registry (16.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.733377ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-4vc42" [685010f2-ef11-43b0-a18b-178cd1e87d00] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004276864s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-52v4j" [f063c01f-0183-42c1-9a5a-248778a35acd] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004291682s
addons_test.go:331: (dbg) Run:  kubectl --context addons-272194 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-272194 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-272194 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.547848244s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 ip
2025/01/20 11:42:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.54s)

TestAddons/parallel/Ingress (19.58s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-272194 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-272194 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-272194 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [02fd14dc-05ac-46d5-a494-5e2dd78d7f23] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [02fd14dc-05ac-46d5-a494-5e2dd78d7f23] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.006139927s
I0120 11:43:36.770396  451835 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-272194 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-272194 addons disable ingress-dns --alsologtostderr -v=1: (1.892502911s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-272194 addons disable ingress --alsologtostderr -v=1: (7.894950336s)
--- PASS: TestAddons/parallel/Ingress (19.58s)

TestAddons/parallel/InspektorGadget (11.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4bzlw" [2d3553dc-39a3-4584-ac91-5b55c369ca3d] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004787851s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-272194 addons disable inspektor-gadget --alsologtostderr -v=1: (5.867162522s)
--- PASS: TestAddons/parallel/InspektorGadget (11.87s)

TestAddons/parallel/MetricsServer (6.91s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.940856ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-zt4h8" [9b54bcdf-ae8b-4736-945b-fd8a95f3eeea] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003994436s
addons_test.go:402: (dbg) Run:  kubectl --context addons-272194 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.91s)

TestAddons/parallel/CSI (59.83s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0120 11:43:10.152134  451835 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0120 11:43:10.158145  451835 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 11:43:10.158182  451835 kapi.go:107] duration metric: took 8.887664ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.904025ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-272194 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-272194 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ed68486d-7a5c-4dc5-98ab-40fa1054ce5c] Pending
helpers_test.go:344: "task-pv-pod" [ed68486d-7a5c-4dc5-98ab-40fa1054ce5c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ed68486d-7a5c-4dc5-98ab-40fa1054ce5c] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003611401s
addons_test.go:511: (dbg) Run:  kubectl --context addons-272194 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-272194 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-272194 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-272194 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-272194 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-272194 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-272194 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [29e64041-06a0-4f41-9bff-bf8fffbd9fdb] Pending
helpers_test.go:344: "task-pv-pod-restore" [29e64041-06a0-4f41-9bff-bf8fffbd9fdb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [29e64041-06a0-4f41-9bff-bf8fffbd9fdb] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004432597s
addons_test.go:553: (dbg) Run:  kubectl --context addons-272194 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-272194 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-272194 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-272194 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.78711588s)
--- PASS: TestAddons/parallel/CSI (59.83s)

TestAddons/parallel/Headlamp (17.08s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-272194 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-272194 --alsologtostderr -v=1: (1.13024761s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-x7765" [9ac09b10-8db5-4278-ab1a-50877453c9b0] Pending
helpers_test.go:344: "headlamp-69d78d796f-x7765" [9ac09b10-8db5-4278-ab1a-50877453c9b0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-x7765" [9ac09b10-8db5-4278-ab1a-50877453c9b0] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003693474s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-272194 addons disable headlamp --alsologtostderr -v=1: (5.940154421s)
--- PASS: TestAddons/parallel/Headlamp (17.08s)

TestAddons/parallel/CloudSpanner (5.66s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-dkcqk" [5aa8bd1c-bffc-4c1b-9ee5-0954bf1488b5] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00469713s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (9.61s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-272194 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-272194 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-272194 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [708b04b2-92d2-4dde-a83f-085bc5894b30] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [708b04b2-92d2-4dde-a83f-085bc5894b30] Running
helpers_test.go:344: "test-local-path" [708b04b2-92d2-4dde-a83f-085bc5894b30] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [708b04b2-92d2-4dde-a83f-085bc5894b30] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003855903s
addons_test.go:906: (dbg) Run:  kubectl --context addons-272194 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 ssh "cat /opt/local-path-provisioner/pvc-c5291294-9d86-4ff2-941a-ffa12ed01d0c_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-272194 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-272194 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.61s)

TestAddons/parallel/NvidiaDevicePlugin (5.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xq22b" [dfc183a8-1c5a-4c21-89cf-0263725ff5f8] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.013541176s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

TestAddons/parallel/Yakd (10.94s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-c7gvl" [0aff228a-ceaf-409b-948f-3b676d2ef8da] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004476459s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-272194 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-272194 addons disable yakd --alsologtostderr -v=1: (5.929577804s)
--- PASS: TestAddons/parallel/Yakd (10.94s)

TestAddons/StoppedEnableDisable (12.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-272194
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-272194: (11.97486619s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-272194
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-272194
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-272194
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

TestCertOptions (33.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-753716 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-753716 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (30.37230415s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-753716 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-753716 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-753716 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-753716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-753716
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-753716: (2.031135902s)
--- PASS: TestCertOptions (33.05s)

TestCertExpiration (226.55s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-152963 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-152963 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.218273182s)
E0120 12:23:20.776541  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-152963 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-152963 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.862250223s)
helpers_test.go:175: Cleaning up "cert-expiration-152963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-152963
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-152963: (2.464469835s)
--- PASS: TestCertExpiration (226.55s)

TestForceSystemdFlag (42.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-296796 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-296796 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.053769355s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-296796 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-296796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-296796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-296796: (3.041051486s)
--- PASS: TestForceSystemdFlag (42.57s)

TestForceSystemdEnv (43.97s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-236901 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-236901 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.049619034s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-236901 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-236901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-236901
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-236901: (2.468846637s)
--- PASS: TestForceSystemdEnv (43.97s)

TestDockerEnvContainerd (46.18s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-900037 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-900037 --driver=docker  --container-runtime=containerd: (30.601704182s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-900037"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-xBUKhUXUllaO/agent.473374" SSH_AGENT_PID="473375" DOCKER_HOST=ssh://docker@127.0.0.1:33172 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-xBUKhUXUllaO/agent.473374" SSH_AGENT_PID="473375" DOCKER_HOST=ssh://docker@127.0.0.1:33172 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-xBUKhUXUllaO/agent.473374" SSH_AGENT_PID="473375" DOCKER_HOST=ssh://docker@127.0.0.1:33172 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.150655679s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-xBUKhUXUllaO/agent.473374" SSH_AGENT_PID="473375" DOCKER_HOST=ssh://docker@127.0.0.1:33172 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-900037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-900037
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-900037: (1.976941674s)
--- PASS: TestDockerEnvContainerd (46.18s)
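
The flow above is reproducible outside the harness: docker-env --ssh-host --ssh-add emits shell exports that point DOCKER_HOST at the node over SSH and load the key into an agent, after which plain docker commands run against the cluster. A sketch reusing the profile name from this run:

	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-900037)"
	docker version
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls    # the freshly built image should appear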

TestErrorSpam/setup (29.94s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-250780 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-250780 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-250780 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-250780 --driver=docker  --container-runtime=containerd: (29.935261884s)
--- PASS: TestErrorSpam/setup (29.94s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 status
--- PASS: TestErrorSpam/status (1.25s)

TestErrorSpam/pause (1.77s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 pause
--- PASS: TestErrorSpam/pause (1.77s)

TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 stop: (1.291532211s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-250780 --log_dir /tmp/nospam-250780 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20151-446459/.minikube/files/etc/test/nested/copy/451835/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
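
The synced path comes from minikube's file-sync convention: anything placed under $MINIKUBE_HOME/.minikube/files/ is copied into the node at the mirrored absolute path on start. A sketch, with <profile> standing in for any profile name:

	mkdir -p ~/.minikube/files/etc/test/nested/copy/451835
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/451835/hosts
	out/minikube-linux-arm64 start -p <profile> --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p <profile> ssh "cat /etc/test/nested/copy/451835/hosts"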

TestFunctional/serial/StartWithProxy (77.7s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805923 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0120 11:46:36.848564  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:36.855321  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:36.866774  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:36.888219  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:36.929676  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:37.011090  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:37.172581  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:37.494341  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:38.135831  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:39.417431  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:41.979918  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:47.102216  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:46:57.344393  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-805923 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m17.703554153s)
--- PASS: TestFunctional/serial/StartWithProxy (77.70s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.25s)

=== RUN   TestFunctional/serial/SoftStart
I0120 11:47:13.320010  451835 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805923 --alsologtostderr -v=8
E0120 11:47:17.826589  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-805923 --alsologtostderr -v=8: (6.244777976s)
functional_test.go:663: soft start took 6.248459033s for "functional-805923" cluster.
I0120 11:47:19.565104  451835 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (6.25s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-805923 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-805923 cache add registry.k8s.io/pause:3.1: (1.514750548s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-805923 cache add registry.k8s.io/pause:3.3: (1.381610811s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-805923 cache add registry.k8s.io/pause:latest: (1.234720279s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-805923 /tmp/TestFunctionalserialCacheCmdcacheadd_local1735002359/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 cache add minikube-local-cache-test:functional-805923
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 cache delete minikube-local-cache-test:functional-805923
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-805923
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)
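
The local variant builds an image on the host and pushes it into the cluster's image store by name; cache delete only drops it from minikube's cache list. The same sequence, standalone (with <dir> standing in for any Docker build context):

	docker build -t minikube-local-cache-test:functional-805923 <dir>
	out/minikube-linux-arm64 -p functional-805923 cache add minikube-local-cache-test:functional-805923
	out/minikube-linux-arm64 -p functional-805923 cache delete minikube-local-cache-test:functional-805923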

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.611269ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-805923 cache reload: (1.104645271s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
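
This is the cache-repair loop in miniature: delete a cached image from inside the node, confirm it is gone, then cache reload to re-push everything on the cache list. Standalone:

	out/minikube-linux-arm64 -p functional-805923 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-805923 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1 while the image is gone
	out/minikube-linux-arm64 -p functional-805923 cache reload
	out/minikube-linux-arm64 -p functional-805923 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again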

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 kubectl -- --context functional-805923 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-805923 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (43.78s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805923 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0120 11:47:58.787922  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-805923 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.777164372s)
functional_test.go:761: restart took 43.777274493s for "functional-805923" cluster.
I0120 11:48:11.855026  451835 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (43.78s)
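
The flag follows the component.key=value shape, where the component is one of the kubeadm-managed pieces (apiserver, controller-manager, scheduler, kubelet, etcd). The restart exercised above, as a standalone command:

	out/minikube-linux-arm64 start -p functional-805923 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all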

TestFunctional/serial/ComponentHealth (0.13s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-805923 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.13s)
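
The same phase/readiness readout can be pulled with a jsonpath query; a sketch that leans on the component label kubeadm puts on its control-plane static pods:

	kubectl --context functional-805923 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'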

TestFunctional/serial/LogsCmd (1.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-805923 logs: (1.713438428s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.78s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 logs --file /tmp/TestFunctionalserialLogsFileCmd2072616533/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-805923 logs --file /tmp/TestFunctionalserialLogsFileCmd2072616533/001/logs.txt: (1.775150531s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.78s)

TestFunctional/serial/InvalidService (4.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-805923 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-805923
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-805923: exit status 115 (441.268149ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31904 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-805923 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)
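
Exit status 115 with SVC_UNREACHABLE is the expected outcome here: the service object exists, but its pod never becomes ready (the testdata image is deliberately broken), so there is nothing to route to. One way to confirm that diagnosis while the manifest is applied:

	kubectl --context functional-805923 get endpoints invalid-svc   # shows no ready addresses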

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 config get cpus: exit status 14 (65.333615ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 config get cpus: exit status 14 (122.462219ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
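
config get on an unset key exits 14, which is what the test keys off and what makes the command scriptable. The cycle, standalone:

	out/minikube-linux-arm64 -p functional-805923 config set cpus 2
	out/minikube-linux-arm64 -p functional-805923 config get cpus      # prints 2
	out/minikube-linux-arm64 -p functional-805923 config unset cpus
	out/minikube-linux-arm64 -p functional-805923 config get cpus || echo "unset (exit $?)"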

TestFunctional/parallel/DashboardCmd (9.28s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-805923 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-805923 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 490518: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.28s)
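
dashboard --url prints the proxied URL instead of launching a browser, which is why the harness can run it as a background daemon and then kill it. Standalone:

	out/minikube-linux-arm64 -p functional-805923 dashboard --url --port 36195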

TestFunctional/parallel/DryRun (0.65s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805923 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-805923 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (297.003876ms)

-- stdout --
	* [functional-805923] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0120 11:49:01.165330  489786 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:49:01.165594  489786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:49:01.165625  489786 out.go:358] Setting ErrFile to fd 2...
	I0120 11:49:01.165663  489786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:49:01.166007  489786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	I0120 11:49:01.166526  489786 out.go:352] Setting JSON to false
	I0120 11:49:01.167978  489786 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5487,"bootTime":1737368255,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 11:49:01.168136  489786 start.go:139] virtualization:  
	I0120 11:49:01.173896  489786 out.go:177] * [functional-805923] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0120 11:49:01.177812  489786 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 11:49:01.177922  489786 notify.go:220] Checking for updates...
	I0120 11:49:01.183944  489786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:49:01.186853  489786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	I0120 11:49:01.189902  489786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	I0120 11:49:01.192668  489786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 11:49:01.195569  489786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 11:49:01.199070  489786 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 11:49:01.199659  489786 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:49:01.245734  489786 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 11:49:01.245868  489786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 11:49:01.353431  489786 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 11:49:01.338119763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 11:49:01.353541  489786 docker.go:318] overlay module found
	I0120 11:49:01.356671  489786 out.go:177] * Using the docker driver based on existing profile
	I0120 11:49:01.359599  489786 start.go:297] selected driver: docker
	I0120 11:49:01.359618  489786 start.go:901] validating driver "docker" against &{Name:functional-805923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-805923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:49:01.360063  489786 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 11:49:01.363868  489786 out.go:201] 
	W0120 11:49:01.366956  489786 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 11:49:01.369781  489786 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805923 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.65s)
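
Exit status 23 is the validation-failure path: --dry-run runs the full flag and resource checks against the existing profile without changing it, so an under-provisioned request fails fast. Standalone:

	out/minikube-linux-arm64 start -p functional-805923 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	echo $?   # 23, RSRC_INSUFFICIENT_REQ_MEMORY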

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-805923 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-805923 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (281.83272ms)

-- stdout --
	* [functional-805923] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0120 11:49:02.218142  490106 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:49:02.218424  490106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:49:02.218437  490106 out.go:358] Setting ErrFile to fd 2...
	I0120 11:49:02.218444  490106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:49:02.219386  490106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	I0120 11:49:02.219965  490106 out.go:352] Setting JSON to false
	I0120 11:49:02.221215  490106 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5488,"bootTime":1737368255,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0120 11:49:02.221297  490106 start.go:139] virtualization:  
	I0120 11:49:02.227420  490106 out.go:177] * [functional-805923] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0120 11:49:02.231344  490106 out.go:177]   - MINIKUBE_LOCATION=20151
	I0120 11:49:02.231554  490106 notify.go:220] Checking for updates...
	I0120 11:49:02.237432  490106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 11:49:02.240598  490106 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	I0120 11:49:02.243481  490106 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	I0120 11:49:02.247893  490106 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0120 11:49:02.250915  490106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 11:49:02.254295  490106 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 11:49:02.254805  490106 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 11:49:02.297567  490106 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 11:49:02.297848  490106 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 11:49:02.378322  490106 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-20 11:49:02.366759373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 11:49:02.378442  490106 docker.go:318] overlay module found
	I0120 11:49:02.381557  490106 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0120 11:49:02.384444  490106 start.go:297] selected driver: docker
	I0120 11:49:02.384469  490106 start.go:901] validating driver "docker" against &{Name:functional-805923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-805923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 11:49:02.384579  490106 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 11:49:02.387923  490106 out.go:201] 
	W0120 11:49:02.390732  490106 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 11:49:02.393639  490106 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
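
The French output is locale-driven; the harness presumably selects it through the process environment. A sketch of reproducing it by hand, on the assumption that LC_ALL is what picks the translation:

	LC_ALL=fr out/minikube-linux-arm64 start -p functional-805923 --dry-run --memory 250MB --driver=docker --container-runtime=containerd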

TestFunctional/parallel/StatusCmd (1.31s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)
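
status -f takes a Go template over the status struct and -o json dumps it whole, which is the scripting-friendly path the test covers. From the run above:

	out/minikube-linux-arm64 -p functional-805923 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	out/minikube-linux-arm64 -p functional-805923 status -o json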

TestFunctional/parallel/ServiceCmdConnect (9.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-805923 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-805923 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-mz4xm" [3572e80a-66af-427c-8c2b-031688652d50] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-mz4xm" [3572e80a-66af-427c-8c2b-031688652d50] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005060864s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32395
functional_test.go:1675: http://192.168.49.2:32395: success! body:

Hostname: hello-node-connect-8449669db6-mz4xm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32395
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.62s)
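
The connect flow condenses to four commands: create a deployment, expose it as a NodePort, resolve the URL through minikube, and hit it. With the names used above:

	kubectl --context functional-805923 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-805923 expose deployment hello-node-connect --type=NodePort --port=8080
	url=$(out/minikube-linux-arm64 -p functional-805923 service hello-node-connect --url)
	curl -s "$url"   # once the pod is Running, echoserver reflects the request back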

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (27.65s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1273a5b1-4430-44ff-ba3d-ab191cf857ae] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004149889s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-805923 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-805923 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-805923 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-805923 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ccadf803-7ec3-49da-a85f-8b35ad1cfb61] Pending
helpers_test.go:344: "sp-pod" [ccadf803-7ec3-49da-a85f-8b35ad1cfb61] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ccadf803-7ec3-49da-a85f-8b35ad1cfb61] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003674227s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-805923 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-805923 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-805923 delete -f testdata/storage-provisioner/pod.yaml: (1.376610664s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-805923 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c8d50664-036a-4f43-b4b4-047d073b2514] Pending
helpers_test.go:344: "sp-pod" [c8d50664-036a-4f43-b4b4-047d073b2514] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003411673s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-805923 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.65s)
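
The persistence check is: claim a volume, mount it in a pod, write a file, delete and recreate the pod, and confirm the file survived. The test's own sequence, runnable from a minikube source checkout (the manifests live in testdata/storage-provisioner):

	kubectl --context functional-805923 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-805923 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-805923 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-805923 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-805923 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-805923 exec sp-pod -- ls /tmp/mount   # foo persists across the pod swap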

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.16s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh -n functional-805923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 cp functional-805923:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd315452255/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh -n functional-805923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh -n functional-805923 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.16s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/451835/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo cat /etc/test/nested/copy/451835/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (1.65s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/451835.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo cat /etc/ssl/certs/451835.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/451835.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo cat /usr/share/ca-certificates/451835.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4518352.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo cat /etc/ssl/certs/4518352.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4518352.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo cat /usr/share/ca-certificates/4518352.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)
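The third path checked in each group (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) follows the OpenSSL subject-hash naming convention for CA lookup directories; presumably each is the hash-named link for the corresponding synced .pem. A minimal Go sketch (paths assumed from the log) of recovering such a name with openssl:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash` prints the subject hash used for the <hash>.0 filename.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/451835.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}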

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-805923 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 ssh "sudo systemctl is-active docker": exit status 1 (391.535956ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 ssh "sudo systemctl is-active crio": exit status 1 (479.64968ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)
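The two non-zero exits above are the passing case: with containerd as the active runtime, `systemctl is-active` prints "inactive" and exits non-zero (status 3) for docker and crio, which `minikube ssh` surfaces as a failed command. A minimal Go sketch (not the test's code) of the same check run locally:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero exit is expected when the unit is not the active runtime.
			fmt.Printf("%s: %s (exit %d)\n", unit, state, exitErr.ExitCode())
		} else if err == nil {
			fmt.Printf("%s: unexpectedly active\n", unit)
		}
	}
}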

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-805923 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-805923 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-805923 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 485228: os: process already finished
helpers_test.go:502: unable to terminate pid 485033: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-805923 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.41s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-805923 version -o=json --components: (1.407060533s)
--- PASS: TestFunctional/parallel/Version/components (1.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-805923 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-805923 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [085dacf6-ec83-4aec-8f96-f1cf7996ed1b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [085dacf6-ec83-4aec-8f96-f1cf7996ed1b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.011129103s
I0120 11:48:29.220065  451835 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-805923 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-805923
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-805923
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805923 image ls --format short --alsologtostderr:
I0120 11:49:04.609094  490673 out.go:345] Setting OutFile to fd 1 ...
I0120 11:49:04.609287  490673 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:04.609294  490673 out.go:358] Setting ErrFile to fd 2...
I0120 11:49:04.609299  490673 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:04.612715  490673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
I0120 11:49:04.613413  490673 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:04.613526  490673 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:04.614064  490673 cli_runner.go:164] Run: docker container inspect functional-805923 --format={{.State.Status}}
I0120 11:49:04.637945  490673 ssh_runner.go:195] Run: systemctl --version
I0120 11:49:04.638007  490673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805923
I0120 11:49:04.662232  490673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/functional-805923/id_rsa Username:docker}
I0120 11:49:04.750617  490673 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-805923 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | sha256:f9d642 | 21.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:7fc9d4 | 67.9MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-scheduler              | v1.32.0            | sha256:c3ff26 | 18.9MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | latest             | sha256:781d90 | 68.5MB |
| registry.k8s.io/kube-apiserver              | v1.32.0            | sha256:2b5bd0 | 26.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kicbase/echo-server               | functional-805923  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-805923  | sha256:9a898e | 991B   |
| localhost/my-image                          | functional-805923  | sha256:df1663 | 831kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-controller-manager     | v1.32.0            | sha256:a8d049 | 24MB   |
| registry.k8s.io/kube-proxy                  | v1.32.0            | sha256:2f5038 | 27.4MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805923 image ls --format table --alsologtostderr:
I0120 11:49:09.391254  491076 out.go:345] Setting OutFile to fd 1 ...
I0120 11:49:09.391472  491076 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:09.391494  491076 out.go:358] Setting ErrFile to fd 2...
I0120 11:49:09.391515  491076 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:09.391768  491076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
I0120 11:49:09.392446  491076 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:09.392586  491076 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:09.393067  491076 cli_runner.go:164] Run: docker container inspect functional-805923 --format={{.State.Status}}
I0120 11:49:09.419301  491076 ssh_runner.go:195] Run: systemctl --version
I0120 11:49:09.419360  491076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805923
I0120 11:49:09.437189  491076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/functional-805923/id_rsa Username:docker}
I0120 11:49:09.526787  491076 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-805923 image ls --format json --alsologtostderr:
[{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"23964889"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id"
:"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21565101"},{"id":"sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"68507108"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}
,{"id":"sha256:2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"26213662"},{"id":"sha256:2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"27362084"},{"id":"sha256:c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"18922208"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c54
8f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-805923"],"size":"2173567"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"re
poTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:9a898e010bc6c58b21778eab380834e53f483adc28e91be2e683222776fb6b1d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-805923"],"size":"991"},{"id":"sha256:df166357d58155b27a236597df954b445b81890cac77c6e1ae21bcbe2012a439","repoDigests":[],"repoTags":["localhost/my-image:functional-805923"],"size":"830618"},{"id":"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"67941650"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805923 image ls --format json --alsologtostderr:
I0120 11:49:09.104160  491044 out.go:345] Setting OutFile to fd 1 ...
I0120 11:49:09.104391  491044 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:09.104420  491044 out.go:358] Setting ErrFile to fd 2...
I0120 11:49:09.104439  491044 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:09.104768  491044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
I0120 11:49:09.105529  491044 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:09.105850  491044 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:09.106466  491044 cli_runner.go:164] Run: docker container inspect functional-805923 --format={{.State.Status}}
I0120 11:49:09.134836  491044 ssh_runner.go:195] Run: systemctl --version
I0120 11:49:09.134898  491044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805923
I0120 11:49:09.161559  491044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/functional-805923/id_rsa Username:docker}
I0120 11:49:09.262070  491044 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
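The stdout above is a JSON array of image records. A minimal Go sketch of decoding it, with field names taken directly from the logged payload (note that size arrives as a string):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors one element of the `image ls --format json` output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	raw, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-805923",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}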

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-805923 image ls --format yaml --alsologtostderr:
- id: sha256:9a898e010bc6c58b21778eab380834e53f483adc28e91be2e683222776fb6b1d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-805923
size: "991"
- id: sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "68507108"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:a8d049396f6b8f19df1e3f6b132cb1b9696806ddf19808f97305dd16fce9450c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "23964889"
- id: sha256:c3ff26fb59f37b5910877d6e3de46aa6b020e586bdf2b441ab5f53b6f0a1797d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "18922208"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-805923
size: "2173567"
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:2b5bd0f16085ac8a7260c30946f3668948a0bb88ac0b9cad635940e3dbef16dc
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "26213662"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f50386e20bfdb3f3b38672c585959554196426c66cc1905e7e7115c47cc2e67
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "27362084"
- id: sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "21565101"
- id: sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "67941650"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805923 image ls --format yaml --alsologtostderr:
I0120 11:49:04.858999  490720 out.go:345] Setting OutFile to fd 1 ...
I0120 11:49:04.859152  490720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:04.859162  490720 out.go:358] Setting ErrFile to fd 2...
I0120 11:49:04.859168  490720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:04.859431  490720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
I0120 11:49:04.861735  490720 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:04.861924  490720 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:04.862486  490720 cli_runner.go:164] Run: docker container inspect functional-805923 --format={{.State.Status}}
I0120 11:49:04.893361  490720 ssh_runner.go:195] Run: systemctl --version
I0120 11:49:04.893444  490720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805923
I0120 11:49:04.914196  490720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/functional-805923/id_rsa Username:docker}
I0120 11:49:05.010233  490720 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 ssh pgrep buildkitd: exit status 1 (289.974378ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image build -t localhost/my-image:functional-805923 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-805923 image build -t localhost/my-image:functional-805923 testdata/build --alsologtostderr: (3.402707436s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-805923 image build -t localhost/my-image:functional-805923 testdata/build --alsologtostderr:
I0120 11:49:05.415070  490812 out.go:345] Setting OutFile to fd 1 ...
I0120 11:49:05.416878  490812 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:05.416895  490812 out.go:358] Setting ErrFile to fd 2...
I0120 11:49:05.416902  490812 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 11:49:05.417263  490812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
I0120 11:49:05.418341  490812 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:05.420517  490812 config.go:182] Loaded profile config "functional-805923": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 11:49:05.421354  490812 cli_runner.go:164] Run: docker container inspect functional-805923 --format={{.State.Status}}
I0120 11:49:05.439899  490812 ssh_runner.go:195] Run: systemctl --version
I0120 11:49:05.439956  490812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-805923
I0120 11:49:05.458089  490812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/functional-805923/id_rsa Username:docker}
I0120 11:49:05.550998  490812 build_images.go:161] Building image from path: /tmp/build.2159911217.tar
I0120 11:49:05.551072  490812 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 11:49:05.560923  490812 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2159911217.tar
I0120 11:49:05.566681  490812 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2159911217.tar: stat -c "%s %y" /var/lib/minikube/build/build.2159911217.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2159911217.tar': No such file or directory
I0120 11:49:05.566709  490812 ssh_runner.go:362] scp /tmp/build.2159911217.tar --> /var/lib/minikube/build/build.2159911217.tar (3072 bytes)
I0120 11:49:05.593717  490812 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2159911217
I0120 11:49:05.603399  490812 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2159911217 -xf /var/lib/minikube/build/build.2159911217.tar
I0120 11:49:05.613312  490812 containerd.go:394] Building image: /var/lib/minikube/build/build.2159911217
I0120 11:49:05.613421  490812 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2159911217 --local dockerfile=/var/lib/minikube/build/build.2159911217 --output type=image,name=localhost/my-image:functional-805923
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:a3801d11254bd4c5966691f5a9c3985b413467274e3d2800644dc388098aff44
#8 exporting manifest sha256:a3801d11254bd4c5966691f5a9c3985b413467274e3d2800644dc388098aff44 0.0s done
#8 exporting config sha256:df166357d58155b27a236597df954b445b81890cac77c6e1ae21bcbe2012a439 0.0s done
#8 naming to localhost/my-image:functional-805923 done
#8 DONE 0.2s
I0120 11:49:08.724736  490812 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2159911217 --local dockerfile=/var/lib/minikube/build/build.2159911217 --output type=image,name=localhost/my-image:functional-805923: (3.111281215s)
I0120 11:49:08.724834  490812 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2159911217
I0120 11:49:08.738923  490812 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2159911217.tar
I0120 11:49:08.750025  490812 build_images.go:217] Built localhost/my-image:functional-805923 from /tmp/build.2159911217.tar
I0120 11:49:08.750056  490812 build_images.go:133] succeeded building to: functional-805923
I0120 11:49:08.750062  490812 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)
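The BuildKit trace shows the 97-byte Dockerfile resolves to three steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), after which the test re-runs `image ls` to confirm the tag exists. A minimal Go sketch (hypothetical helper, not minikube's test code) of that build-then-verify flow:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-805923"
	tag := "localhost/my-image:" + profile
	// Build testdata/build inside the node via buildctl, as logged above.
	build := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}
	// Confirm the tag landed in containerd's image store.
	ls, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("image present:", strings.Contains(string(ls), tag))
}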

TestFunctional/parallel/ImageCommands/Setup (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-805923
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image load --daemon kicbase/echo-server:functional-805923 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-805923 image load --daemon kicbase/echo-server:functional-805923 --alsologtostderr: (1.178611102s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image load --daemon kicbase/echo-server:functional-805923 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-805923
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image load --daemon kicbase/echo-server:functional-805923 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image save kicbase/echo-server:functional-805923 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image rm kicbase/echo-server:functional-805923 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-805923
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 image save --daemon kicbase/echo-server:functional-805923 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-805923
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 update-context --alsologtostderr -v=2
2025/01/20 11:49:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-805923 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.198.168 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-805923 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (8.18s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdany-port3301907214/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737373710472656990" to /tmp/TestFunctionalparallelMountCmdany-port3301907214/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737373710472656990" to /tmp/TestFunctionalparallelMountCmdany-port3301907214/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737373710472656990" to /tmp/TestFunctionalparallelMountCmdany-port3301907214/001/test-1737373710472656990
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.017271ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0120 11:48:30.839668  451835 retry.go:31] will retry after 474.937662ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 11:48 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 11:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 11:48 test-1737373710472656990
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh cat /mount-9p/test-1737373710472656990
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-805923 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [867911d0-a4ba-4b27-ba65-b9b631d7979a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [867911d0-a4ba-4b27-ba65-b9b631d7979a] Running
helpers_test.go:344: "busybox-mount" [867911d0-a4ba-4b27-ba65-b9b631d7979a] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [867911d0-a4ba-4b27-ba65-b9b631d7979a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003463366s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-805923 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdany-port3301907214/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.18s)
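Note the probe pattern in this test: the first `findmnt -T /mount-9p` fails while the 9p mount is still coming up, and the test retries roughly half a second later. A minimal Go sketch (assumed helper, not minikube's retry.go) of such a poll loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs findmnt over `minikube ssh` until the 9p mount is
// visible inside the guest or the deadline passes.
func waitForMount(profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		probe := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if probe.Run() == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not ready after %s", mountPoint, timeout)
		}
		time.Sleep(500 * time.Millisecond) // comparable to the backoff logged above
	}
}

func main() {
	fmt.Println(waitForMount("functional-805923", "/mount-9p", 30*time.Second))
}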

TestFunctional/parallel/MountCmd/specific-port (2.21s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdspecific-port3111935590/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (409.429566ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0120 11:48:39.062106  451835 retry.go:31] will retry after 430.983104ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdspecific-port3111935590/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 ssh "sudo umount -f /mount-9p": exit status 1 (323.363482ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-805923 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdspecific-port3111935590/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.21s)
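
The same mount can be pinned to a fixed 9p port, as this test does with --port 46464. A minimal sketch, again with the hypothetical "demo" profile:

	# placeholder profile "demo"; a fixed port helps when a firewall sits between host and node
	minikube -p demo mount /tmp/data:/mount-9p --port 46464 &
	# verify the 9p mount is live inside the node
	minikube -p demo ssh "findmnt -T /mount-9p | grep 9p"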

TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdVerifyCleanup532640923/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdVerifyCleanup532640923/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdVerifyCleanup532640923/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T" /mount1: exit status 1 (966.884596ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 11:48:41.828515  451835 retry.go:31] will retry after 361.393395ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-805923 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdVerifyCleanup532640923/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdVerifyCleanup532640923/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-805923 /tmp/TestFunctionalparallelMountCmdVerifyCleanup532640923/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)
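
The cleanup path exercised above hinges on mount's --kill flag, which tears down every lingering mount process for a profile in one call. A minimal sketch (hypothetical "demo" profile):

	# "demo" is a placeholder profile name
	minikube -p demo mount /tmp/data:/mount1 &
	minikube -p demo mount /tmp/data:/mount2 &
	# terminate all mount daemons for this profile at once
	minikube mount -p demo --kill=true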

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-805923 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-805923 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-9kvcn" [548e9cdc-9119-467d-ac00-5ad8e1389393] Pending
helpers_test.go:344: "hello-node-64fc58db8c-9kvcn" [548e9cdc-9119-467d-ac00-5ad8e1389393] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005729675s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)
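
The deployment under test is plain kubectl against the cluster's context; a minimal sketch of the same steps:

	kubectl create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl expose deployment hello-node --type=NodePort --port=8080
	# watch until the pod reports Running
	kubectl get pods -l app=hello-node --watch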

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 service list -o json
functional_test.go:1494: Took "711.480107ms" to run "out/minikube-linux-arm64 -p functional-805923 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.71s)

TestFunctional/parallel/ProfileCmd/profile_list (0.67s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "599.439023ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "65.458286ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.67s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31667
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
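
A minimal sketch of the endpoint lookup this test performs (hypothetical "demo" profile):

	# "demo" is a placeholder; prints the HTTPS endpoint without opening a browser
	minikube -p demo service --namespace=default --https --url hello-node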

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "448.397881ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "89.392237ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-805923 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31667
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)
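
The Format and URL subtests above boil down to two variants of the same lookup; a minimal sketch (hypothetical "demo" profile):

	# "demo" is a placeholder profile name
	minikube -p demo service hello-node --url
	# a Go template narrows the output to just the node IP
	minikube -p demo service hello-node --url --format={{.IP}}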

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-805923
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-805923
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-805923
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (135.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-949936 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0120 11:49:20.709509  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-949936 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m14.570108367s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (135.38s)
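
The --ha flag used here provisions multiple control-plane nodes behind a shared virtual IP. A minimal sketch, with "ha-demo" as a placeholder profile name:

	# "ha-demo" is a placeholder profile name
	minikube start -p ha-demo --ha --memory=2200 --driver=docker --container-runtime=containerd
	# expect several "type: Control Plane" entries in the output
	minikube -p ha-demo status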

TestMultiControlPlane/serial/DeployApp (33.46s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- rollout status deployment/busybox
E0120 11:51:36.848184  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-949936 -- rollout status deployment/busybox: (30.359660685s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-772w4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-g94jq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-m5d95 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-772w4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-g94jq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-m5d95 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-772w4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-g94jq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-m5d95 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.46s)

TestMultiControlPlane/serial/PingHostFromPods (1.67s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-772w4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-772w4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-g94jq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-g94jq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-m5d95 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0120 11:52:04.550831  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-949936 -- exec busybox-58667487b6-m5d95 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)
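
The host-reachability check reduces to two in-pod commands; a minimal sketch, with "busybox-pod" as a placeholder pod name:

	# "busybox-pod" is a placeholder; the host is published to pods under a well-known name
	kubectl exec busybox-pod -- nslookup host.minikube.internal
	# the resolved gateway address should answer pings
	kubectl exec busybox-pod -- ping -c 1 192.168.49.1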

TestMultiControlPlane/serial/AddWorkerNode (23.15s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-949936 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-949936 -v=7 --alsologtostderr: (22.162193627s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.15s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-949936 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

TestMultiControlPlane/serial/CopyFile (19.65s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-949936 status --output json -v=7 --alsologtostderr: (1.056923181s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp testdata/cp-test.txt ha-949936:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2278872691/001/cp-test_ha-949936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936:/home/docker/cp-test.txt ha-949936-m02:/home/docker/cp-test_ha-949936_ha-949936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m02 "sudo cat /home/docker/cp-test_ha-949936_ha-949936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936:/home/docker/cp-test.txt ha-949936-m03:/home/docker/cp-test_ha-949936_ha-949936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m03 "sudo cat /home/docker/cp-test_ha-949936_ha-949936-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936:/home/docker/cp-test.txt ha-949936-m04:/home/docker/cp-test_ha-949936_ha-949936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m04 "sudo cat /home/docker/cp-test_ha-949936_ha-949936-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp testdata/cp-test.txt ha-949936-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2278872691/001/cp-test_ha-949936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m02:/home/docker/cp-test.txt ha-949936:/home/docker/cp-test_ha-949936-m02_ha-949936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936 "sudo cat /home/docker/cp-test_ha-949936-m02_ha-949936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m02:/home/docker/cp-test.txt ha-949936-m03:/home/docker/cp-test_ha-949936-m02_ha-949936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m03 "sudo cat /home/docker/cp-test_ha-949936-m02_ha-949936-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m02:/home/docker/cp-test.txt ha-949936-m04:/home/docker/cp-test_ha-949936-m02_ha-949936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m04 "sudo cat /home/docker/cp-test_ha-949936-m02_ha-949936-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp testdata/cp-test.txt ha-949936-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2278872691/001/cp-test_ha-949936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m03:/home/docker/cp-test.txt ha-949936:/home/docker/cp-test_ha-949936-m03_ha-949936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936 "sudo cat /home/docker/cp-test_ha-949936-m03_ha-949936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m03:/home/docker/cp-test.txt ha-949936-m02:/home/docker/cp-test_ha-949936-m03_ha-949936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m02 "sudo cat /home/docker/cp-test_ha-949936-m03_ha-949936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m03:/home/docker/cp-test.txt ha-949936-m04:/home/docker/cp-test_ha-949936-m03_ha-949936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m04 "sudo cat /home/docker/cp-test_ha-949936-m03_ha-949936-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp testdata/cp-test.txt ha-949936-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2278872691/001/cp-test_ha-949936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m04:/home/docker/cp-test.txt ha-949936:/home/docker/cp-test_ha-949936-m04_ha-949936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936 "sudo cat /home/docker/cp-test_ha-949936-m04_ha-949936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m04:/home/docker/cp-test.txt ha-949936-m02:/home/docker/cp-test_ha-949936-m04_ha-949936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m02 "sudo cat /home/docker/cp-test_ha-949936-m04_ha-949936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 cp ha-949936-m04:/home/docker/cp-test.txt ha-949936-m03:/home/docker/cp-test_ha-949936-m04_ha-949936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 ssh -n ha-949936-m03 "sudo cat /home/docker/cp-test_ha-949936-m04_ha-949936-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.65s)
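
Every pairing above follows the same two-step pattern: cp into a named node, then read the file back over ssh. A minimal sketch (hypothetical "ha-demo" profile):

	# "ha-demo" is a placeholder profile; -n targets a specific node
	minikube -p ha-demo cp testdata/cp-test.txt ha-demo-m02:/home/docker/cp-test.txt
	minikube -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/cp-test.txt"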

TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-949936 node stop m02 -v=7 --alsologtostderr: (12.048098437s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr: exit status 7 (737.623673ms)

-- stdout --
	ha-949936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-949936-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-949936-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0120 11:53:01.022054  507674 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:53:01.022201  507674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:53:01.022226  507674 out.go:358] Setting ErrFile to fd 2...
	I0120 11:53:01.022245  507674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:53:01.022541  507674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	I0120 11:53:01.022761  507674 out.go:352] Setting JSON to false
	I0120 11:53:01.022830  507674 mustload.go:65] Loading cluster: ha-949936
	I0120 11:53:01.022934  507674 notify.go:220] Checking for updates...
	I0120 11:53:01.023356  507674 config.go:182] Loaded profile config "ha-949936": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 11:53:01.023378  507674 status.go:174] checking status of ha-949936 ...
	I0120 11:53:01.023958  507674 cli_runner.go:164] Run: docker container inspect ha-949936 --format={{.State.Status}}
	I0120 11:53:01.044211  507674 status.go:371] ha-949936 host status = "Running" (err=<nil>)
	I0120 11:53:01.044234  507674 host.go:66] Checking if "ha-949936" exists ...
	I0120 11:53:01.044542  507674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-949936
	I0120 11:53:01.073697  507674 host.go:66] Checking if "ha-949936" exists ...
	I0120 11:53:01.074062  507674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:53:01.074147  507674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-949936
	I0120 11:53:01.102832  507674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/ha-949936/id_rsa Username:docker}
	I0120 11:53:01.195271  507674 ssh_runner.go:195] Run: systemctl --version
	I0120 11:53:01.200413  507674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:53:01.214084  507674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 11:53:01.270719  507674 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:73 SystemTime:2025-01-20 11:53:01.260981762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 11:53:01.271356  507674 kubeconfig.go:125] found "ha-949936" server: "https://192.168.49.254:8443"
	I0120 11:53:01.271394  507674 api_server.go:166] Checking apiserver status ...
	I0120 11:53:01.271445  507674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 11:53:01.285084  507674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1458/cgroup
	I0120 11:53:01.295969  507674 api_server.go:182] apiserver freezer: "6:freezer:/docker/25db5e1e9ce2ccae0fd8c83c8e03ea84c3d8045af7558db73dbeb691897544f7/kubepods/burstable/pod9b9c33ab27fbd69171b7537f8a2cae7a/4516bba04a674f84b63253bde873b42eea3089947c2554a1e61afe3169601f30"
	I0120 11:53:01.296049  507674 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/25db5e1e9ce2ccae0fd8c83c8e03ea84c3d8045af7558db73dbeb691897544f7/kubepods/burstable/pod9b9c33ab27fbd69171b7537f8a2cae7a/4516bba04a674f84b63253bde873b42eea3089947c2554a1e61afe3169601f30/freezer.state
	I0120 11:53:01.305225  507674 api_server.go:204] freezer state: "THAWED"
	I0120 11:53:01.305259  507674 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 11:53:01.314075  507674 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 11:53:01.314103  507674 status.go:463] ha-949936 apiserver status = Running (err=<nil>)
	I0120 11:53:01.314113  507674 status.go:176] ha-949936 status: &{Name:ha-949936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:53:01.314134  507674 status.go:174] checking status of ha-949936-m02 ...
	I0120 11:53:01.314446  507674 cli_runner.go:164] Run: docker container inspect ha-949936-m02 --format={{.State.Status}}
	I0120 11:53:01.334005  507674 status.go:371] ha-949936-m02 host status = "Stopped" (err=<nil>)
	I0120 11:53:01.334047  507674 status.go:384] host is not running, skipping remaining checks
	I0120 11:53:01.334054  507674 status.go:176] ha-949936-m02 status: &{Name:ha-949936-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:53:01.334076  507674 status.go:174] checking status of ha-949936-m03 ...
	I0120 11:53:01.334475  507674 cli_runner.go:164] Run: docker container inspect ha-949936-m03 --format={{.State.Status}}
	I0120 11:53:01.352545  507674 status.go:371] ha-949936-m03 host status = "Running" (err=<nil>)
	I0120 11:53:01.352569  507674 host.go:66] Checking if "ha-949936-m03" exists ...
	I0120 11:53:01.353943  507674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-949936-m03
	I0120 11:53:01.373095  507674 host.go:66] Checking if "ha-949936-m03" exists ...
	I0120 11:53:01.373627  507674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:53:01.373688  507674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-949936-m03
	I0120 11:53:01.392109  507674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/ha-949936-m03/id_rsa Username:docker}
	I0120 11:53:01.478466  507674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:53:01.490601  507674 kubeconfig.go:125] found "ha-949936" server: "https://192.168.49.254:8443"
	I0120 11:53:01.490631  507674 api_server.go:166] Checking apiserver status ...
	I0120 11:53:01.490687  507674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 11:53:01.501846  507674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup
	I0120 11:53:01.512262  507674 api_server.go:182] apiserver freezer: "6:freezer:/docker/d4df4da85fa85cb93e06938b87d6ee175fb30651f212f5284bb05d8d860cce65/kubepods/burstable/pod9926316df4613f3a504361cac2ff21eb/26f2e2d630781a3636327c52793bfcf48ae090a7b43321cf0a7097e3682a9438"
	I0120 11:53:01.512371  507674 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d4df4da85fa85cb93e06938b87d6ee175fb30651f212f5284bb05d8d860cce65/kubepods/burstable/pod9926316df4613f3a504361cac2ff21eb/26f2e2d630781a3636327c52793bfcf48ae090a7b43321cf0a7097e3682a9438/freezer.state
	I0120 11:53:01.525036  507674 api_server.go:204] freezer state: "THAWED"
	I0120 11:53:01.525070  507674 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 11:53:01.533630  507674 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 11:53:01.533660  507674 status.go:463] ha-949936-m03 apiserver status = Running (err=<nil>)
	I0120 11:53:01.533671  507674 status.go:176] ha-949936-m03 status: &{Name:ha-949936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:53:01.533704  507674 status.go:174] checking status of ha-949936-m04 ...
	I0120 11:53:01.534040  507674 cli_runner.go:164] Run: docker container inspect ha-949936-m04 --format={{.State.Status}}
	I0120 11:53:01.552658  507674 status.go:371] ha-949936-m04 host status = "Running" (err=<nil>)
	I0120 11:53:01.552685  507674 host.go:66] Checking if "ha-949936-m04" exists ...
	I0120 11:53:01.552985  507674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-949936-m04
	I0120 11:53:01.571982  507674 host.go:66] Checking if "ha-949936-m04" exists ...
	I0120 11:53:01.572305  507674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 11:53:01.572356  507674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-949936-m04
	I0120 11:53:01.592672  507674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/ha-949936-m04/id_rsa Username:docker}
	I0120 11:53:01.683028  507674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 11:53:01.698035  507674 status.go:176] ha-949936-m04 status: &{Name:ha-949936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
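
Note that the exit status 7 above is the expected outcome: status reports per-node state and exits non-zero while any node is down, which is exactly what the test asserts after stopping m02. A minimal sketch (hypothetical "ha-demo" profile):

	# "ha-demo" is a placeholder profile name
	minikube -p ha-demo node stop m02
	# a non-zero exit flags the degraded cluster; the report still lists every node
	minikube -p ha-demo status || echo "cluster degraded"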

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.59s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-949936 node start m02 -v=7 --alsologtostderr: (17.474697285s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr
E0120 11:53:20.774503  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:20.780837  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:20.792451  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:20.813663  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:20.855433  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:20.937393  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0120 11:53:21.098928  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:21.422085  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:22.067424  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.18s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-949936 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-949936 -v=7 --alsologtostderr
E0120 11:53:23.349664  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:25.910964  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:31.032909  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:53:41.274238  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-949936 -v=7 --alsologtostderr: (36.916168231s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-949936 --wait=true -v=7 --alsologtostderr
E0120 11:54:01.755719  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 11:54:42.717747  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-949936 --wait=true -v=7 --alsologtostderr: (1m50.085908179s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-949936
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.18s)
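
A minimal sketch of the stop/restart cycle this test performs (hypothetical "ha-demo" profile):

	# "ha-demo" is a placeholder profile name
	minikube stop -p ha-demo
	# --wait=true blocks until the restarted components report healthy
	minikube start -p ha-demo --wait=true
	# the node list should match the pre-stop cluster
	minikube node list -p ha-demo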

TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-949936 node delete m03 -v=7 --alsologtostderr: (9.577245149s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.99s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.99s)

TestMultiControlPlane/serial/StopCluster (35.87s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 stop -v=7 --alsologtostderr
E0120 11:56:04.641069  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-949936 stop -v=7 --alsologtostderr: (35.750248655s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr: exit status 7 (116.781415ms)

-- stdout --
	ha-949936
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949936-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949936-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0120 11:56:36.533218  522215 out.go:345] Setting OutFile to fd 1 ...
	I0120 11:56:36.533393  522215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:56:36.533421  522215 out.go:358] Setting ErrFile to fd 2...
	I0120 11:56:36.533428  522215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 11:56:36.533885  522215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	I0120 11:56:36.534396  522215 out.go:352] Setting JSON to false
	I0120 11:56:36.534474  522215 mustload.go:65] Loading cluster: ha-949936
	I0120 11:56:36.534694  522215 notify.go:220] Checking for updates...
	I0120 11:56:36.534982  522215 config.go:182] Loaded profile config "ha-949936": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 11:56:36.535020  522215 status.go:174] checking status of ha-949936 ...
	I0120 11:56:36.535616  522215 cli_runner.go:164] Run: docker container inspect ha-949936 --format={{.State.Status}}
	I0120 11:56:36.555645  522215 status.go:371] ha-949936 host status = "Stopped" (err=<nil>)
	I0120 11:56:36.555666  522215 status.go:384] host is not running, skipping remaining checks
	I0120 11:56:36.555672  522215 status.go:176] ha-949936 status: &{Name:ha-949936 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:56:36.555699  522215 status.go:174] checking status of ha-949936-m02 ...
	I0120 11:56:36.556027  522215 cli_runner.go:164] Run: docker container inspect ha-949936-m02 --format={{.State.Status}}
	I0120 11:56:36.576960  522215 status.go:371] ha-949936-m02 host status = "Stopped" (err=<nil>)
	I0120 11:56:36.576983  522215 status.go:384] host is not running, skipping remaining checks
	I0120 11:56:36.576990  522215 status.go:176] ha-949936-m02 status: &{Name:ha-949936-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 11:56:36.577009  522215 status.go:174] checking status of ha-949936-m04 ...
	I0120 11:56:36.577315  522215 cli_runner.go:164] Run: docker container inspect ha-949936-m04 --format={{.State.Status}}
	I0120 11:56:36.599687  522215 status.go:371] ha-949936-m04 host status = "Stopped" (err=<nil>)
	I0120 11:56:36.599716  522215 status.go:384] host is not running, skipping remaining checks
	I0120 11:56:36.599723  522215 status.go:176] ha-949936-m04 status: &{Name:ha-949936-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.87s)

TestMultiControlPlane/serial/RestartCluster (64.15s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-949936 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0120 11:56:36.848097  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-949936 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.110879694s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (64.15s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

TestMultiControlPlane/serial/AddSecondaryNode (44.32s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-949936 --control-plane -v=7 --alsologtostderr
E0120 11:58:20.773902  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-949936 --control-plane -v=7 --alsologtostderr: (43.308884411s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-949936 status -v=7 --alsologtostderr: (1.011642748s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.32s)
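
Control-plane capacity can also be grown after the fact; a minimal sketch (hypothetical "ha-demo" profile):

	# "ha-demo" is a placeholder profile name
	minikube node add -p ha-demo --control-plane
	minikube -p ha-demo status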

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.064753368s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

TestJSONOutput/start/Command (79.1s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-878981 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0120 11:58:48.483611  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-878981 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m19.1005713s)
--- PASS: TestJSONOutput/start/Command (79.10s)
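
With --output=json, start emits each step as a structured JSON event line instead of the usual console text, which the step-numbering subtests below then validate. A minimal sketch (hypothetical "json-demo" profile):

	# "json-demo" is a placeholder profile name
	minikube start -p json-demo --output=json --user=testUser --driver=docker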

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-878981 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-878981 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-878981 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-878981 --output=json --user=testUser: (5.882895082s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-821934 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-821934 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (109.321911ms)

-- stdout --
	{"specversion":"1.0","id":"a6caee12-b2ac-4502-9d89-38a70e39689a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-821934] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c05ef39-c746-490e-9233-c0b147f0afc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20151"}}
	{"specversion":"1.0","id":"413727ac-0d5b-4ece-9bd2-cc1df849db3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"311416ac-b402-4bbd-979a-7bde6746f669","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig"}}
	{"specversion":"1.0","id":"60fd0c61-130b-4a02-8c38-42065dbb1bfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube"}}
	{"specversion":"1.0","id":"8d5e48f3-8df6-4cf9-9f49-de582301d11a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e1dae0ab-039e-476c-9473-efdcb32546a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ae32afb2-10c2-4551-9a97-6ff1c4e7da99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-821934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-821934
--- PASS: TestErrorJSONOutput (0.26s)
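For scripting against failures, the io.k8s.sigs.minikube.error event above carries the error name, message, and exit code. A hedged sketch of extracting them, again assuming jq is available ("err-demo" is an illustrative profile name):

	# filter the error event out of the JSON stream of a failing start
	out/minikube-linux-arm64 start -p err-demo --driver=fail --output=json \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'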

TestKicCustomNetwork/create_custom_network (39.49s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-514219 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-514219 --network=: (37.304795557s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-514219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-514219
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-514219: (2.162412776s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.49s)
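What this test exercises, sketched with an illustrative profile name: an empty --network= value makes minikube create a docker network named after the profile, which docker network ls should then show, and deleting the profile should clean the network up as well:

	out/minikube-linux-arm64 start -p net-demo --network=
	docker network ls --format {{.Name}}         # "net-demo" is expected in the list
	out/minikube-linux-arm64 delete -p net-demo  # should also remove the created network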

TestKicCustomNetwork/use_default_bridge_network (33.02s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-231106 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-231106 --network=bridge: (31.049190384s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-231106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-231106
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-231106: (1.949638971s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.02s)

TestKicExistingNetwork (35.31s)

=== RUN   TestKicExistingNetwork
I0120 12:01:18.391097  451835 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0120 12:01:18.407805  451835 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0120 12:01:18.407884  451835 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0120 12:01:18.407901  451835 cli_runner.go:164] Run: docker network inspect existing-network
W0120 12:01:18.425080  451835 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0120 12:01:18.425109  451835 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0120 12:01:18.425123  451835 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0120 12:01:18.425231  451835 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 12:01:18.442962  451835 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ab00e182d66a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a6:06:fc:f6} reservation:<nil>}
I0120 12:01:18.443336  451835 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ce8f50}
I0120 12:01:18.443361  451835 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0120 12:01:18.443415  451835 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0120 12:01:18.531575  451835 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-602831 --network=existing-network
E0120 12:01:36.847978  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-602831 --network=existing-network: (33.164657291s)
helpers_test.go:175: Cleaning up "existing-network-602831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-602831
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-602831: (1.965326247s)
I0120 12:01:53.679021  451835 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.31s)
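Condensing the log above into a sketch (the masquerade/ICC/MTU options from the full invocation are omitted here): the network is created out-of-band with minikube's labels, then reused by name via --network:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
	  existing-network
	out/minikube-linux-arm64 start -p existing-network-602831 --network=existing-network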

TestKicCustomSubnet (34.16s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-745041 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-745041 --subnet=192.168.60.0/24: (32.027866201s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-745041 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-745041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-745041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-745041: (2.106526446s)
--- PASS: TestKicCustomSubnet (34.16s)
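The same check reduces to two commands, shown here with an illustrative profile name: pin the cluster network's subnet at start, then read it back from docker's IPAM config:

	out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"  # expect 192.168.60.0/24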

TestKicStaticIP (33.27s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-002635 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-002635 --static-ip=192.168.200.200: (30.785922393s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-002635 ip
helpers_test.go:175: Cleaning up "static-ip-002635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-002635
E0120 12:02:59.912595  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-002635: (2.32369586s)
--- PASS: TestKicStaticIP (33.27s)
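Condensed into a sketch ("ip-demo" is illustrative): --static-ip pins the node address and minikube ip reads it back:

	out/minikube-linux-arm64 start -p ip-demo --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p ip-demo ip  # expect 192.168.200.200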

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (66.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-007315 --driver=docker  --container-runtime=containerd
E0120 12:03:20.774767  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-007315 --driver=docker  --container-runtime=containerd: (28.924453832s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-009916 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-009916 --driver=docker  --container-runtime=containerd: (31.70946146s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-007315
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-009916
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-009916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-009916
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-009916: (2.064095648s)
helpers_test.go:175: Cleaning up "first-007315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-007315
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-007315: (1.961990878s)
--- PASS: TestMinikubeProfile (66.14s)
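The flow above, condensed: `profile <name>` switches the active profile, and `profile list -ojson` is what the test parses to confirm the switch took effect:

	out/minikube-linux-arm64 profile first-007315
	out/minikube-linux-arm64 profile list -ojson  # the test checks that first-007315 is now the active profile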

TestMountStart/serial/StartWithMountFirst (8.76s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-870089 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-870089 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.758716894s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.76s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-870089 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
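The start and verify steps combine into this sketch (flags copied from the log; /minikube-host is where the mounted host directory appears in the guest; "mount-demo" is illustrative):

	out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount \
	  --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
	  --no-kubernetes --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host  # verify the mount is visible in the guest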

TestMountStart/serial/StartWithMountSecond (6.42s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-872018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-872018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.420340647s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.42s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-872018 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-870089 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-870089 --alsologtostderr -v=5: (1.624059829s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-872018 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-872018
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-872018: (1.195998353s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.12s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-872018
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-872018: (6.121480253s)
--- PASS: TestMountStart/serial/RestartStopped (7.12s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-872018 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (103.2s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-730292 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-730292 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m42.69728379s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.20s)
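The two commands above, as a reusable sketch with an illustrative profile name:

	out/minikube-linux-arm64 start -p multi-demo --nodes=2 --memory=2200 \
	  --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p multi-demo status  # expect a control plane plus worker m02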

TestMultiNode/serial/DeployApp2Nodes (19.88s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-730292 -- rollout status deployment/busybox: (17.925665972s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0120 12:06:36.848049  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-cxlnp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-xr7fv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-cxlnp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-xr7fv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-cxlnp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-xr7fv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (19.88s)

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-cxlnp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-cxlnp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-xr7fv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-730292 -- exec busybox-58667487b6-xr7fv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
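The equivalent manual check, sketched (the busybox deployment comes from DeployApp2Nodes above; the gateway address 192.168.67.1 varies per cluster network, and "multi-demo" is an illustrative context name):

	kubectl --context multi-demo exec deployment/busybox -- nslookup host.minikube.internal
	kubectl --context multi-demo exec deployment/busybox -- ping -c 1 192.168.67.1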

TestMultiNode/serial/AddNode (18.61s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-730292 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-730292 -v 3 --alsologtostderr: (17.931583657s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.61s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-730292 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp testdata/cp-test.txt multinode-730292:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp multinode-730292:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1077784046/001/cp-test_multinode-730292.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp multinode-730292:/home/docker/cp-test.txt multinode-730292-m02:/home/docker/cp-test_multinode-730292_multinode-730292-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m02 "sudo cat /home/docker/cp-test_multinode-730292_multinode-730292-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp multinode-730292:/home/docker/cp-test.txt multinode-730292-m03:/home/docker/cp-test_multinode-730292_multinode-730292-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m03 "sudo cat /home/docker/cp-test_multinode-730292_multinode-730292-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp testdata/cp-test.txt multinode-730292-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp multinode-730292-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1077784046/001/cp-test_multinode-730292-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp multinode-730292-m02:/home/docker/cp-test.txt multinode-730292:/home/docker/cp-test_multinode-730292-m02_multinode-730292.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292 "sudo cat /home/docker/cp-test_multinode-730292-m02_multinode-730292.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp multinode-730292-m02:/home/docker/cp-test.txt multinode-730292-m03:/home/docker/cp-test_multinode-730292-m02_multinode-730292-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m03 "sudo cat /home/docker/cp-test_multinode-730292-m02_multinode-730292-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp testdata/cp-test.txt multinode-730292-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp multinode-730292-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1077784046/001/cp-test_multinode-730292-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp multinode-730292-m03:/home/docker/cp-test.txt multinode-730292:/home/docker/cp-test_multinode-730292-m03_multinode-730292.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292 "sudo cat /home/docker/cp-test_multinode-730292-m03_multinode-730292.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 cp multinode-730292-m03:/home/docker/cp-test.txt multinode-730292-m02:/home/docker/cp-test_multinode-730292-m03_multinode-730292-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 ssh -n multinode-730292-m02 "sudo cat /home/docker/cp-test_multinode-730292-m03_multinode-730292-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.24s)
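The copy matrix above reduces to three directions of minikube cp; a minimal sketch with illustrative names (an -m02 suffix addresses a secondary node):

	out/minikube-linux-arm64 -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt  # host -> node
	out/minikube-linux-arm64 -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/cp-test.txt      # node -> host
	out/minikube-linux-arm64 -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt  # node -> node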

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-730292 node stop m03: (1.217750589s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-730292 status: exit status 7 (520.241788ms)

-- stdout --
	multinode-730292
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-730292-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-730292-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-730292 status --alsologtostderr: exit status 7 (513.438984ms)

-- stdout --
	multinode-730292
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-730292-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-730292-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0120 12:07:10.807273  576417 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:07:10.807416  576417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:07:10.807441  576417 out.go:358] Setting ErrFile to fd 2...
	I0120 12:07:10.807453  576417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:07:10.807717  576417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	I0120 12:07:10.807956  576417 out.go:352] Setting JSON to false
	I0120 12:07:10.807995  576417 mustload.go:65] Loading cluster: multinode-730292
	I0120 12:07:10.808151  576417 notify.go:220] Checking for updates...
	I0120 12:07:10.808476  576417 config.go:182] Loaded profile config "multinode-730292": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:07:10.808498  576417 status.go:174] checking status of multinode-730292 ...
	I0120 12:07:10.809068  576417 cli_runner.go:164] Run: docker container inspect multinode-730292 --format={{.State.Status}}
	I0120 12:07:10.829794  576417 status.go:371] multinode-730292 host status = "Running" (err=<nil>)
	I0120 12:07:10.829816  576417 host.go:66] Checking if "multinode-730292" exists ...
	I0120 12:07:10.830116  576417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-730292
	I0120 12:07:10.860042  576417 host.go:66] Checking if "multinode-730292" exists ...
	I0120 12:07:10.860345  576417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 12:07:10.860388  576417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-730292
	I0120 12:07:10.881101  576417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33307 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/multinode-730292/id_rsa Username:docker}
	I0120 12:07:10.966924  576417 ssh_runner.go:195] Run: systemctl --version
	I0120 12:07:10.971437  576417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:07:10.983102  576417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 12:07:11.036673  576417 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2025-01-20 12:07:11.026437976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0120 12:07:11.037426  576417 kubeconfig.go:125] found "multinode-730292" server: "https://192.168.67.2:8443"
	I0120 12:07:11.037463  576417 api_server.go:166] Checking apiserver status ...
	I0120 12:07:11.037510  576417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:07:11.048897  576417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup
	I0120 12:07:11.058611  576417 api_server.go:182] apiserver freezer: "6:freezer:/docker/589c63ef664516c76fcc59566d1018fe70b1d524c60617819cfe247cf1735726/kubepods/burstable/podfe51cc1df1e14de1756140337e9c8f34/f4dff7c89b72d7c75c72723ae08798cc4c2f333f5ed32a78eb3b61efa0257f53"
	I0120 12:07:11.058696  576417 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/589c63ef664516c76fcc59566d1018fe70b1d524c60617819cfe247cf1735726/kubepods/burstable/podfe51cc1df1e14de1756140337e9c8f34/f4dff7c89b72d7c75c72723ae08798cc4c2f333f5ed32a78eb3b61efa0257f53/freezer.state
	I0120 12:07:11.068214  576417 api_server.go:204] freezer state: "THAWED"
	I0120 12:07:11.068246  576417 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0120 12:07:11.076326  576417 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0120 12:07:11.076372  576417 status.go:463] multinode-730292 apiserver status = Running (err=<nil>)
	I0120 12:07:11.076384  576417 status.go:176] multinode-730292 status: &{Name:multinode-730292 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 12:07:11.076411  576417 status.go:174] checking status of multinode-730292-m02 ...
	I0120 12:07:11.076736  576417 cli_runner.go:164] Run: docker container inspect multinode-730292-m02 --format={{.State.Status}}
	I0120 12:07:11.093997  576417 status.go:371] multinode-730292-m02 host status = "Running" (err=<nil>)
	I0120 12:07:11.094025  576417 host.go:66] Checking if "multinode-730292-m02" exists ...
	I0120 12:07:11.094332  576417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-730292-m02
	I0120 12:07:11.112016  576417 host.go:66] Checking if "multinode-730292-m02" exists ...
	I0120 12:07:11.112465  576417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 12:07:11.112515  576417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-730292-m02
	I0120 12:07:11.131409  576417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33312 SSHKeyPath:/home/jenkins/minikube-integration/20151-446459/.minikube/machines/multinode-730292-m02/id_rsa Username:docker}
	I0120 12:07:11.218679  576417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:07:11.230515  576417 status.go:176] multinode-730292-m02 status: &{Name:multinode-730292-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 12:07:11.230553  576417 status.go:174] checking status of multinode-730292-m03 ...
	I0120 12:07:11.230866  576417 cli_runner.go:164] Run: docker container inspect multinode-730292-m03 --format={{.State.Status}}
	I0120 12:07:11.249658  576417 status.go:371] multinode-730292-m03 host status = "Stopped" (err=<nil>)
	I0120 12:07:11.249687  576417 status.go:384] host is not running, skipping remaining checks
	I0120 12:07:11.249694  576417 status.go:176] multinode-730292-m03 status: &{Name:multinode-730292-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)

TestMultiNode/serial/StartAfterStop (9.34s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-730292 node start m03 -v=7 --alsologtostderr: (8.592902605s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.34s)
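StopNode and StartAfterStop together exercise this node lifecycle; sketched with an illustrative profile name:

	out/minikube-linux-arm64 -p multi-demo node stop m03   # status exits 7 while the host is down
	out/minikube-linux-arm64 -p multi-demo node start m03
	out/minikube-linux-arm64 -p multi-demo status          # exits 0 again once every host is running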

TestMultiNode/serial/RestartKeepsNodes (85.68s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-730292
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-730292
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-730292: (24.924031648s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-730292 --wait=true -v=8 --alsologtostderr
E0120 12:08:20.774511  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-730292 --wait=true -v=8 --alsologtostderr: (1m0.618203936s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-730292
--- PASS: TestMultiNode/serial/RestartKeepsNodes (85.68s)

TestMultiNode/serial/DeleteNode (5.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-730292 node delete m03: (4.584626457s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)

TestMultiNode/serial/StopMultiNode (23.93s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-730292 stop: (23.723560924s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-730292 status: exit status 7 (99.992034ms)

-- stdout --
	multinode-730292
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-730292-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-730292 status --alsologtostderr: exit status 7 (104.515435ms)

-- stdout --
	multinode-730292
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-730292-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0120 12:09:15.415487  584380 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:09:15.415665  584380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:09:15.415678  584380 out.go:358] Setting ErrFile to fd 2...
	I0120 12:09:15.415683  584380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:09:15.415950  584380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20151-446459/.minikube/bin
	I0120 12:09:15.416168  584380 out.go:352] Setting JSON to false
	I0120 12:09:15.416218  584380 mustload.go:65] Loading cluster: multinode-730292
	I0120 12:09:15.416288  584380 notify.go:220] Checking for updates...
	I0120 12:09:15.417731  584380 config.go:182] Loaded profile config "multinode-730292": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 12:09:15.417766  584380 status.go:174] checking status of multinode-730292 ...
	I0120 12:09:15.418398  584380 cli_runner.go:164] Run: docker container inspect multinode-730292 --format={{.State.Status}}
	I0120 12:09:15.437028  584380 status.go:371] multinode-730292 host status = "Stopped" (err=<nil>)
	I0120 12:09:15.437050  584380 status.go:384] host is not running, skipping remaining checks
	I0120 12:09:15.437058  584380 status.go:176] multinode-730292 status: &{Name:multinode-730292 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 12:09:15.437087  584380 status.go:174] checking status of multinode-730292-m02 ...
	I0120 12:09:15.437416  584380 cli_runner.go:164] Run: docker container inspect multinode-730292-m02 --format={{.State.Status}}
	I0120 12:09:15.459191  584380 status.go:371] multinode-730292-m02 host status = "Stopped" (err=<nil>)
	I0120 12:09:15.459217  584380 status.go:384] host is not running, skipping remaining checks
	I0120 12:09:15.459224  584380 status.go:176] multinode-730292-m02 status: &{Name:multinode-730292-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.93s)
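A scripting note grounded in the exit codes above: status exits 7 when hosts are stopped, so it can gate automation ("multi-demo" is illustrative):

	out/minikube-linux-arm64 -p multi-demo stop
	if ! out/minikube-linux-arm64 -p multi-demo status >/dev/null; then
	  echo "cluster is not fully running"  # the exit status 7 case lands here
	fi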

TestMultiNode/serial/RestartMultiNode (49.59s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-730292 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0120 12:09:43.844984  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-730292 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.923581072s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-730292 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.59s)

TestMultiNode/serial/ValidateNameConflict (35.49s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-730292
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-730292-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-730292-m02 --driver=docker  --container-runtime=containerd: exit status 14 (110.992724ms)

-- stdout --
	* [multinode-730292-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-730292-m02' is duplicated with machine name 'multinode-730292-m02' in profile 'multinode-730292'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-730292-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-730292-m03 --driver=docker  --container-runtime=containerd: (32.970529362s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-730292
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-730292: exit status 80 (334.068913ms)

-- stdout --
	* Adding node m03 to cluster multinode-730292 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-730292-m03 already exists in multinode-730292-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-730292-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-730292-m03: (2.015466335s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.49s)
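Note: the conflict exercised above comes from minikube's multi-node naming scheme, in which the worker nodes of profile P are registered as P-m02, P-m03, and so on. A minimal reproduction sketch, using an illustrative profile name rather than the one from this run:

	# the second node of this profile is auto-named multinode-demo-m02
	minikube start -p multinode-demo --nodes=2 --driver=docker --container-runtime=containerd
	# reusing that generated name as a profile name should fail with exit 14 (MK_USAGE)
	minikube start -p multinode-demo-m02 --driver=docker --container-runtime=containerd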

TestPreload (115.83s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-691549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0120 12:11:36.847968  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-691549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m18.14007862s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-691549 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-691549 image pull gcr.io/k8s-minikube/busybox: (2.088262264s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-691549
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-691549: (11.991468389s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-691549 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-691549 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.808578093s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-691549 image list
helpers_test.go:175: Cleaning up "test-preload-691549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-691549
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-691549: (2.422633103s)
--- PASS: TestPreload (115.83s)
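The pass above rests on the final `image list` call: `gcr.io/k8s-minikube/busybox`, pulled before the stop, must still be present after the non-preload restart. A quick manual spot-check along the same lines (run before the cleanup step) might be:

	# should still list the busybox image pulled before the stop/start cycle
	out/minikube-linux-arm64 -p test-preload-691549 image list | grep busybox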

TestScheduledStopUnix (105.74s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-661545 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-661545 --memory=2048 --driver=docker  --container-runtime=containerd: (30.081362932s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-661545 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-661545 -n scheduled-stop-661545
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-661545 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 12:13:11.077800  451835 retry.go:31] will retry after 87.147µs: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.078955  451835 retry.go:31] will retry after 126.502µs: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.080099  451835 retry.go:31] will retry after 269.792µs: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.081191  451835 retry.go:31] will retry after 349.54µs: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.082304  451835 retry.go:31] will retry after 435.094µs: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.083385  451835 retry.go:31] will retry after 622.504µs: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.084494  451835 retry.go:31] will retry after 1.499776ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.086691  451835 retry.go:31] will retry after 2.033721ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.088874  451835 retry.go:31] will retry after 1.485672ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.091388  451835 retry.go:31] will retry after 4.650788ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.096840  451835 retry.go:31] will retry after 7.052308ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.104191  451835 retry.go:31] will retry after 6.249262ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.111447  451835 retry.go:31] will retry after 18.968708ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.130804  451835 retry.go:31] will retry after 28.459332ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.160169  451835 retry.go:31] will retry after 33.01165ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
I0120 12:13:11.193353  451835 retry.go:31] will retry after 46.48915ms: open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/scheduled-stop-661545/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-661545 --cancel-scheduled
E0120 12:13:20.774788  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-661545 -n scheduled-stop-661545
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-661545
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-661545 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-661545
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-661545: exit status 7 (69.57702ms)

-- stdout --
	scheduled-stop-661545
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-661545 -n scheduled-stop-661545
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-661545 -n scheduled-stop-661545: exit status 7 (71.423778ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-661545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-661545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-661545: (4.045273757s)
--- PASS: TestScheduledStopUnix (105.74s)

TestInsufficientStorage (10.91s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-824326 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-824326 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.435625063s)

-- stdout --
	{"specversion":"1.0","id":"57afa856-e77e-48bf-8038-d55e0487676b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-824326] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3bbd3a48-c00b-436e-8115-13a909bb5e0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20151"}}
	{"specversion":"1.0","id":"66b99fd1-43b4-433c-82e5-25a1edf38b6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6ba414b1-d438-47c2-b0d6-6ed70078796b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig"}}
	{"specversion":"1.0","id":"0f664dfd-a046-4495-bdcb-9c974b21503b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube"}}
	{"specversion":"1.0","id":"c893c4d1-b0a5-4efa-a2ac-19cfc1976c4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"62398e58-3294-45ac-9a05-496039e75434","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c76b776f-1716-471d-abcc-58e745eff9b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d436d415-8c0d-4865-a78d-dd839571e005","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b3e5d899-9ad6-4ddb-b2a4-9dc806896c5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4a5d70d-2c48-436b-a31a-715e4218a737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"88ae82f9-5b78-446a-93d6-a4d25d1ee8ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-824326\" primary control-plane node in \"insufficient-storage-824326\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e3056d7-56ab-4962-9587-ea39e5f5ec03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b55c00c-7cd4-434d-8941-fc55e3b5cb64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"13c935c0-1fcd-492b-8a08-b25fefb941bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-824326 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-824326 --output=json --layout=cluster: exit status 7 (289.614456ms)

-- stdout --
	{"Name":"insufficient-storage-824326","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-824326","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0120 12:14:34.919160  603201 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-824326" does not appear in /home/jenkins/minikube-integration/20151-446459/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-824326 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-824326 --output=json --layout=cluster: exit status 7 (290.354702ms)

-- stdout --
	{"Name":"insufficient-storage-824326","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-824326","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0120 12:14:35.210613  603263 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-824326" does not appear in /home/jenkins/minikube-integration/20151-446459/kubeconfig
	E0120 12:14:35.220832  603263 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/insufficient-storage-824326/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-824326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-824326
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-824326: (1.89696116s)
--- PASS: TestInsufficientStorage (10.91s)
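The low-disk condition here is simulated: the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 values in the event stream above fake a nearly full /var, so the start aborts with RSRC_DOCKER_STORAGE. A sketch of triggering the same check by hand, assuming those test-only variables behave as shown:

	# start should abort with exit code 26 (RSRC_DOCKER_STORAGE)
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p insufficient-storage-demo --memory=2048 --output=json --driver=docker
	echo $?   # 26; per the error message, --force skips the storage check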

TestRunningBinaryUpgrade (87.13s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2102781855 start -p running-upgrade-262051 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0120 12:19:39.914743  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2102781855 start -p running-upgrade-262051 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.176198675s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-262051 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-262051 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.640027661s)
helpers_test.go:175: Cleaning up "running-upgrade-262051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-262051
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-262051: (2.669168403s)
--- PASS: TestRunningBinaryUpgrade (87.13s)

TestKubernetesUpgrade (346.54s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-272158 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-272158 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.477289112s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-272158
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-272158: (1.569352376s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-272158 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-272158 status --format={{.Host}}: exit status 7 (198.638249ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-272158 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-272158 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m35.695978342s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-272158 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-272158 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-272158 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (109.222989ms)

-- stdout --
	* [kubernetes-upgrade-272158] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-272158
	    minikube start -p kubernetes-upgrade-272158 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2721582 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-272158 --kubernetes-version=v1.32.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-272158 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-272158 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.126382115s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-272158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-272158
E0120 12:21:36.847832  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-272158: (2.238993749s)
--- PASS: TestKubernetesUpgrade (346.54s)
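The `kubectl version --output=json` call above is what confirms the control plane actually moved to v1.32.0. Narrowing that output to the server version alone is a one-liner; a sketch, assuming jq is available:

	# prints the running control-plane version, e.g. v1.32.0
	kubectl --context kubernetes-upgrade-272158 version --output=json | jq -r '.serverVersion.gitVersion'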

TestMissingContainerUpgrade (166.78s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1956752181 start -p missing-upgrade-374342 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1956752181 start -p missing-upgrade-374342 --memory=2200 --driver=docker  --container-runtime=containerd: (1m31.593628444s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-374342
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-374342: (10.41110297s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-374342
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-374342 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0120 12:16:36.848202  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-374342 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.836622485s)
helpers_test.go:175: Cleaning up "missing-upgrade-374342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-374342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-374342: (2.31413569s)
--- PASS: TestMissingContainerUpgrade (166.78s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-437547 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-437547 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (92.902667ms)

-- stdout --
	* [NoKubernetes-437547] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20151
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20151-446459/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20151-446459/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
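As the MK_USAGE message states, --no-kubernetes and --kubernetes-version are mutually exclusive. Dropping the version flag, as the StartWithStopK8s subtest below does, is the valid form:

	# valid: no version pinning when Kubernetes is disabled
	out/minikube-linux-arm64 start -p NoKubernetes-437547 --no-kubernetes --driver=docker --container-runtime=containerd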

TestNoKubernetes/serial/StartWithK8s (39.32s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-437547 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-437547 --driver=docker  --container-runtime=containerd: (38.883454586s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-437547 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.32s)

TestNoKubernetes/serial/StartWithStopK8s (17.15s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-437547 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-437547 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.987475871s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-437547 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-437547 status -o json: exit status 2 (297.144256ms)

-- stdout --
	{"Name":"NoKubernetes-437547","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-437547
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-437547: (1.867636242s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.15s)

TestNoKubernetes/serial/Start (6.39s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-437547 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-437547 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.393800793s)
--- PASS: TestNoKubernetes/serial/Start (6.39s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-437547 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-437547 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.857574ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
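The exit status 1 is the expected result here: `systemctl is-active` exits 0 only for an active unit, so the remote status 3 confirms the kubelet is not running. Inverting the check makes the assertion explicit; a sketch:

	# succeeds only when kubelet is NOT active
	out/minikube-linux-arm64 ssh -p NoKubernetes-437547 "! sudo systemctl is-active --quiet kubelet"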

TestNoKubernetes/serial/ProfileList (0.96s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

TestNoKubernetes/serial/Stop (1.22s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-437547
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-437547: (1.216673815s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (7.47s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-437547 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-437547 --driver=docker  --container-runtime=containerd: (7.466499781s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-437547 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-437547 "sudo systemctl is-active --quiet service kubelet": exit status 1 (362.01732ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (0.67s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.67s)

TestStoppedBinaryUpgrade/Upgrade (101.64s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4147858264 start -p stopped-upgrade-664038 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4147858264 start -p stopped-upgrade-664038 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.055514564s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4147858264 -p stopped-upgrade-664038 stop
E0120 12:18:20.774858  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4147858264 -p stopped-upgrade-664038 stop: (19.907884627s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-664038 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-664038 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.673897026s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.64s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-664038
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-664038: (1.197394812s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

TestPause/serial/Start (65.15s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-475739 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-475739 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m5.150255768s)
--- PASS: TestPause/serial/Start (65.15s)

TestPause/serial/SecondStartNoReconfiguration (7.93s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-475739 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-475739 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.894751003s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.93s)

TestPause/serial/Pause (0.95s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-475739 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

TestPause/serial/VerifyStatus (0.41s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-475739 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-475739 --output=json --layout=cluster: exit status 2 (413.709931ms)

-- stdout --
	{"Name":"pause-475739","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-475739","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
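The cluster-layout JSON above encodes pause state as HTTP-like codes (418 Paused, 405 Stopped, 200 OK). Extracting per-component states is a one-liner; a sketch, assuming jq (note that status itself exits 2 while the cluster is paused):

	out/minikube-linux-arm64 status -p pause-475739 --output=json --layout=cluster \
	  | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'
	# apiserver: Paused
	# kubelet: Stopped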

TestPause/serial/Unpause (0.88s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-475739 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

TestPause/serial/PauseAgain (1.05s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-475739 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-475739 --alsologtostderr -v=5: (1.045455406s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

TestPause/serial/DeletePaused (2.85s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-475739 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-475739 --alsologtostderr -v=5: (2.854161171s)
--- PASS: TestPause/serial/DeletePaused (2.85s)

TestPause/serial/VerifyDeletedResources (0.46s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-475739
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-475739: exit status 1 (19.760968ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-475739: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
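The non-zero `docker volume inspect` is the point of this check: after `minikube delete`, the profile's Docker volume must no longer exist. The same assertion in shell form; a sketch:

	# inspect fails once the volume is gone
	docker volume inspect pause-475739 >/dev/null 2>&1 && echo "volume still present" || echo "volume removed"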

TestStartStop/group/old-k8s-version/serial/FirstStart (177.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-618033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-618033 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m57.276853789s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (177.28s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.90s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-800877 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 12:26:23.847146  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-800877 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (52.903600645s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.90s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.72s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-618033 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a19edb14-ee6d-4a3c-bcf9-05cb759662b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a19edb14-ee6d-4a3c-bcf9-05cb759662b2] Running
E0120 12:26:36.848045  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004055707s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-618033 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.72s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-618033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-618033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.023715098s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-618033 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/old-k8s-version/serial/Stop (12.33s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-618033 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-618033 --alsologtostderr -v=3: (12.333244944s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-618033 -n old-k8s-version-618033
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-618033 -n old-k8s-version-618033: exit status 7 (74.060276ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-618033 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.55s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-800877 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b12373a-66ab-4fce-b114-1759d475f3da] Pending
helpers_test.go:344: "busybox" [2b12373a-66ab-4fce-b114-1759d475f3da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2b12373a-66ab-4fce-b114-1759d475f3da] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003678578s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-800877 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.55s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.75s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-800877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-800877 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.566979948s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-800877 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.75s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-800877 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-800877 --alsologtostderr -v=3: (12.303717134s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-800877 -n default-k8s-diff-port-800877
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-800877 -n default-k8s-diff-port-800877: exit status 7 (74.749549ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-800877 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.98s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-800877 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 12:28:20.774184  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:31:36.848154  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-800877 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (4m36.313104189s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-800877 -n default-k8s-diff-port-800877
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.98s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4rd5l" [4b2e6579-0f5e-4ce1-b7cc-bc0a7fcc0260] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00433195s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4rd5l" [4b2e6579-0f5e-4ce1-b7cc-bc0a7fcc0260] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00423027s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-800877 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-800877 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
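
VerifyKubernetesImages appears to work by dumping the runtime's image list as JSON and flagging anything outside its expected set, which is what the two "Found non-minikube image" lines report. A rough stand-in for that audit, assuming the JSON entries expose a repoTags array (field name is an assumption) and using an illustrative allow-list rather than the test's real expected-image table:

	out/minikube-linux-arm64 -p default-k8s-diff-port-800877 image list --format=json \
	  | jq -r '.[].repoTags[]?' \
	  | grep -vE '^(registry\.k8s\.io/|docker\.io/kubernetesui/)' \
	  || true    # anything printed is a "non-minikube" image
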
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-800877 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-800877 -n default-k8s-diff-port-800877
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-800877 -n default-k8s-diff-port-800877: exit status 2 (339.471653ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-800877 -n default-k8s-diff-port-800877
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-800877 -n default-k8s-diff-port-800877: exit status 2 (339.563486ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-800877 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-800877 -n default-k8s-diff-port-800877
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-800877 -n default-k8s-diff-port-800877
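
The Pause subtest drives a full pause/verify/unpause/verify cycle: while paused, the apiserver reports "Paused" and the kubelet "Stopped", each via exit status 2, and the final two status calls closing the loop after unpause. A minimal sketch of the same cycle (profile reused from the log; the post-unpause codes being 0 is inferred from the absence of Non-zero-exit lines above):

	p=default-k8s-diff-port-800877
	out/minikube-linux-arm64 pause -p "$p"
	out/minikube-linux-arm64 status --format={{.APIServer}} -p "$p" || echo "paused, status exit $?"
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p "$p" || echo "paused, status exit $?"
	out/minikube-linux-arm64 unpause -p "$p"
	out/minikube-linux-arm64 status --format={{.APIServer}} -p "$p"    # expected: Running, exit 0
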
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

TestStartStop/group/embed-certs/serial/FirstStart (54.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-180778 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-180778 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (54.863961449s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.86s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g46zv" [e2a2e1a2-9378-44ed-a49b-d4e96bbf1591] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005569794s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-180778 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5b2fe0f0-50f2-493b-857f-e93555709025] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5b2fe0f0-50f2-493b-857f-e93555709025] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005951417s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-180778 exec busybox -- /bin/sh -c "ulimit -n"
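
DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to move from Pending to Running, then probes it with an exec. The same flow in plain kubectl, as a sketch (context name and the 8m budget taken from the log; kubectl wait stands in for the harness's pod watcher):

	kubectl --context embed-certs-180778 create -f testdata/busybox.yaml
	kubectl --context embed-certs-180778 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context embed-certs-180778 exec busybox -- /bin/sh -c "ulimit -n"
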
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g46zv" [e2a2e1a2-9378-44ed-a49b-d4e96bbf1591] Running
E0120 12:33:20.774343  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003963781s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-618033 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-618033 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-618033 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-618033 -n old-k8s-version-618033
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-618033 -n old-k8s-version-618033: exit status 2 (418.310768ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-618033 -n old-k8s-version-618033
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-618033 -n old-k8s-version-618033: exit status 2 (411.864325ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-618033 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-618033 --alsologtostderr -v=1: (1.132427722s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-618033 -n old-k8s-version-618033
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-618033 -n old-k8s-version-618033
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.67s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-180778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-180778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.27395129s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-180778 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/embed-certs/serial/Stop (12.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-180778 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-180778 --alsologtostderr -v=3: (12.293668161s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.29s)

TestStartStop/group/no-preload/serial/FirstStart (76.86s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-717328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-717328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m16.857485012s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.86s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-180778 -n embed-certs-180778
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-180778 -n embed-certs-180778: exit status 7 (112.958134ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-180778 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (270.82s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-180778 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-180778 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (4m30.457634585s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-180778 -n embed-certs-180778
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (270.82s)

TestStartStop/group/no-preload/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-717328 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [09b1d431-b0b6-45d9-9ae0-bfe8a13c8056] Pending
helpers_test.go:344: "busybox" [09b1d431-b0b6-45d9-9ae0-bfe8a13c8056] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [09b1d431-b0b6-45d9-9ae0-bfe8a13c8056] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00386219s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-717328 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-717328 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-717328 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.079716708s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-717328 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/no-preload/serial/Stop (12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-717328 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-717328 --alsologtostderr -v=3: (12.000533534s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-717328 -n no-preload-717328
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-717328 -n no-preload-717328: exit status 7 (84.797611ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-717328 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (269.13s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-717328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 12:36:19.916016  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:28.435923  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:28.442374  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:28.453829  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:28.475253  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:28.516736  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:28.598254  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:28.759668  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:29.081123  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:29.723056  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:31.004388  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:33.565881  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:36.848125  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/addons-272194/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:38.687575  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:36:48.929805  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:00.970381  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:00.976882  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:00.988274  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:01.009917  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:01.051386  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:01.132853  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:01.294369  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:01.616584  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:02.258621  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:03.540787  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:06.102975  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:09.412087  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:11.224622  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:21.467406  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:41.948858  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:37:50.373979  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-717328 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (4m28.669193158s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-717328 -n no-preload-717328
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.13s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-djxp8" [55f383cd-69e1-4838-9884-b0bb19500db7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003909834s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-djxp8" [55f383cd-69e1-4838-9884-b0bb19500db7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003983524s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-180778 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-180778 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-180778 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-180778 -n embed-certs-180778
E0120 12:38:20.774081  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/functional-805923/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-180778 -n embed-certs-180778: exit status 2 (328.108007ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-180778 -n embed-certs-180778
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-180778 -n embed-certs-180778: exit status 2 (339.671157ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-180778 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-180778 -n embed-certs-180778
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-180778 -n embed-certs-180778
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)

TestStartStop/group/newest-cni/serial/FirstStart (35.8s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-124445 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-124445 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (35.803324694s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.80s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-124445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-124445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.117037247s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-124445 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-124445 --alsologtostderr -v=3: (1.266880715s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124445 -n newest-cni-124445
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124445 -n newest-cni-124445: exit status 7 (88.791839ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-124445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (16.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-124445 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 12:39:12.295602  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/old-k8s-version-618033/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-124445 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (16.272737977s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-124445 -n newest-cni-124445
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.64s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-124445 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/newest-cni/serial/Pause (3.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-124445 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-124445 -n newest-cni-124445
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-124445 -n newest-cni-124445: exit status 2 (395.141747ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-124445 -n newest-cni-124445
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-124445 -n newest-cni-124445: exit status 2 (414.706955ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-124445 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-124445 -n newest-cni-124445
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-124445 -n newest-cni-124445
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.65s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qjwzp" [565eba10-a913-41ec-9df4-deaaf494ca28] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005061s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qjwzp" [565eba10-a913-41ec-9df4-deaaf494ca28] Running
E0120 12:39:44.833122  451835 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20151-446459/.minikube/profiles/default-k8s-diff-port-800877/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004267325s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-717328 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-717328 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

Test skip (27/282)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

TestDownloadOnly/v1.32.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

TestDownloadOnly/v1.32.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

TestDownloadOnlyKic (0.6s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-011586 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-011586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-011586
--- SKIP: TestDownloadOnlyKic (0.60s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
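
Editor's note: unlike the platform guards above, this skip is gated on a test flag. Below is a minimal sketch of a flag-gated skip; registering the flag at package level is the usual go test idiom, though how minikube actually wires --gvisor is not shown in this log, so the names here are illustrative.

package example

import (
	"flag"
	"testing"
)

// Flag-gated guard in the style of gvisor_addon_test.go:34. go test parses
// package-level flags like this one before any test runs.
var gvisor = flag.Bool("gvisor", false, "run gvisor integration tests")

func TestGvisorStyleGuard(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
}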
x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-065527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-065527
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
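
Editor's note: even though this test skips, the helper still deletes the leftover profile, which is where the 0.16s above comes from. A minimal sketch of such a cleanup step, assuming only that the suite shells out to the minikube binary under test; the function name and signature are illustrative, not helpers_test.go's actual API.

package example

import (
	"os/exec"
	"testing"
)

// cleanupProfile deletes a minikube profile by shelling out to the binary
// under test, mirroring the "delete -p" cleanup command logged above.
func cleanupProfile(t *testing.T, binary, profile string) {
	t.Helper()
	out, err := exec.Command(binary, "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("failed to clean up profile %q: %v\n%s", profile, err, out)
	}
}

Called as cleanupProfile(t, "out/minikube-linux-arm64", "disable-driver-mounts-065527"), this reproduces the command shown in the log.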