Test Report: Docker_Linux_containerd_arm64 20385

693540c0733dd51efa818bcfa77a0c31e0bd95f4:2025-02-10:38290

Failed tests (1/331)

Order  Failed test                                              Duration (s)
305    TestStartStop/group/old-k8s-version/serial/SecondStart  383.04
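
The failure below comes from re-starting an existing "old-k8s-version" profile; the full command and its exit status are recorded in the log that follows. As a rough local reproduction sketch (assuming a freshly built out/minikube-linux-arm64 binary and a working Docker daemon, as on the CI host), the same start command from the log could be re-run directly:

	out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0

In this run the command exited with status 102 after 6m18s.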
TestStartStop/group/old-k8s-version/serial/SecondStart (383.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.199616224s)

-- stdout --
	* [old-k8s-version-705847] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-705847" primary control-plane node in "old-k8s-version-705847" cluster
	* Pulling base image v0.0.46 ...
	* Restarting existing docker container for "old-k8s-version-705847" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-705847 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0210 11:11:15.854169  792122 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:11:15.854414  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:11:15.854442  792122 out.go:358] Setting ErrFile to fd 2...
	I0210 11:11:15.854464  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:11:15.854732  792122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 11:11:15.855143  792122 out.go:352] Setting JSON to false
	I0210 11:11:15.856189  792122 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14021,"bootTime":1739171855,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0210 11:11:15.856293  792122 start.go:139] virtualization:  
	I0210 11:11:15.861413  792122 out.go:177] * [old-k8s-version-705847] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0210 11:11:15.864473  792122 notify.go:220] Checking for updates...
	I0210 11:11:15.867546  792122 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:11:15.870370  792122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:11:15.873166  792122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	I0210 11:11:15.876049  792122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	I0210 11:11:15.878962  792122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0210 11:11:15.881652  792122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:11:15.884853  792122 config.go:182] Loaded profile config "old-k8s-version-705847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0210 11:11:15.888460  792122 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0210 11:11:15.891248  792122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:11:15.926905  792122 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 11:11:15.927039  792122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 11:11:16.025922  792122 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:true NGoroutines:69 SystemTime:2025-02-10 11:11:16.013196367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 11:11:16.026050  792122 docker.go:318] overlay module found
	I0210 11:11:16.029075  792122 out.go:177] * Using the docker driver based on existing profile
	I0210 11:11:16.031808  792122 start.go:297] selected driver: docker
	I0210 11:11:16.031835  792122 start.go:901] validating driver "docker" against &{Name:old-k8s-version-705847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:11:16.031955  792122 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:11:16.032694  792122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 11:11:16.121670  792122 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:true NGoroutines:69 SystemTime:2025-02-10 11:11:16.110854411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 11:11:16.122055  792122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:11:16.122080  792122 cni.go:84] Creating CNI manager for ""
	I0210 11:11:16.122120  792122 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 11:11:16.122159  792122 start.go:340] cluster config:
	{Name:old-k8s-version-705847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:contai
nerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:11:16.125339  792122 out.go:177] * Starting "old-k8s-version-705847" primary control-plane node in "old-k8s-version-705847" cluster
	I0210 11:11:16.128143  792122 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0210 11:11:16.131258  792122 out.go:177] * Pulling base image v0.0.46 ...
	I0210 11:11:16.134026  792122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0210 11:11:16.134092  792122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0210 11:11:16.134103  792122 cache.go:56] Caching tarball of preloaded images
	I0210 11:11:16.134211  792122 preload.go:172] Found /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0210 11:11:16.134224  792122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0210 11:11:16.134356  792122 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/config.json ...
	I0210 11:11:16.134591  792122 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0210 11:11:16.162248  792122 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0210 11:11:16.162277  792122 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0210 11:11:16.162291  792122 cache.go:230] Successfully downloaded all kic artifacts
	I0210 11:11:16.162314  792122 start.go:360] acquireMachinesLock for old-k8s-version-705847: {Name:mk6cce887f4e2ae32173ee31c8bf770fec39b41b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:11:16.162365  792122 start.go:364] duration metric: took 33.788µs to acquireMachinesLock for "old-k8s-version-705847"
	I0210 11:11:16.162383  792122 start.go:96] Skipping create...Using existing machine configuration
	I0210 11:11:16.162388  792122 fix.go:54] fixHost starting: 
	I0210 11:11:16.162640  792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
	I0210 11:11:16.191549  792122 fix.go:112] recreateIfNeeded on old-k8s-version-705847: state=Stopped err=<nil>
	W0210 11:11:16.191578  792122 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 11:11:16.194992  792122 out.go:177] * Restarting existing docker container for "old-k8s-version-705847" ...
	I0210 11:11:16.197848  792122 cli_runner.go:164] Run: docker start old-k8s-version-705847
	I0210 11:11:16.569523  792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
	I0210 11:11:16.593452  792122 kic.go:430] container "old-k8s-version-705847" state is running.
	I0210 11:11:16.593947  792122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-705847
	I0210 11:11:16.620893  792122 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/config.json ...
	I0210 11:11:16.621115  792122 machine.go:93] provisionDockerMachine start ...
	I0210 11:11:16.621191  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:16.647808  792122 main.go:141] libmachine: Using SSH client type: native
	I0210 11:11:16.648075  792122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0210 11:11:16.648091  792122 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:11:16.649710  792122 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0210 11:11:19.784831  792122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-705847
	
	I0210 11:11:19.784909  792122 ubuntu.go:169] provisioning hostname "old-k8s-version-705847"
	I0210 11:11:19.784995  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:19.811436  792122 main.go:141] libmachine: Using SSH client type: native
	I0210 11:11:19.811683  792122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0210 11:11:19.811695  792122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-705847 && echo "old-k8s-version-705847" | sudo tee /etc/hostname
	I0210 11:11:19.968816  792122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-705847
	
	I0210 11:11:19.968987  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:19.995722  792122 main.go:141] libmachine: Using SSH client type: native
	I0210 11:11:19.996000  792122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil>  [] 0s} 127.0.0.1 33798 <nil> <nil>}
	I0210 11:11:19.996018  792122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-705847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-705847/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-705847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:11:20.134280  792122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:11:20.134372  792122 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20385-576242/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-576242/.minikube}
	I0210 11:11:20.134446  792122 ubuntu.go:177] setting up certificates
	I0210 11:11:20.134481  792122 provision.go:84] configureAuth start
	I0210 11:11:20.134562  792122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-705847
	I0210 11:11:20.165701  792122 provision.go:143] copyHostCerts
	I0210 11:11:20.165771  792122 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem, removing ...
	I0210 11:11:20.165781  792122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem
	I0210 11:11:20.165863  792122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem (1078 bytes)
	I0210 11:11:20.165974  792122 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem, removing ...
	I0210 11:11:20.165980  792122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem
	I0210 11:11:20.166009  792122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem (1123 bytes)
	I0210 11:11:20.166073  792122 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem, removing ...
	I0210 11:11:20.166078  792122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem
	I0210 11:11:20.166102  792122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem (1679 bytes)
	I0210 11:11:20.166159  792122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-705847 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-705847]
	I0210 11:11:20.587091  792122 provision.go:177] copyRemoteCerts
	I0210 11:11:20.587207  792122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:11:20.587265  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:20.605581  792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
	I0210 11:11:20.706961  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:11:20.748289  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0210 11:11:20.791018  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:11:20.835241  792122 provision.go:87] duration metric: took 700.735484ms to configureAuth
	I0210 11:11:20.835270  792122 ubuntu.go:193] setting minikube options for container-runtime
	I0210 11:11:20.835476  792122 config.go:182] Loaded profile config "old-k8s-version-705847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0210 11:11:20.835490  792122 machine.go:96] duration metric: took 4.214359793s to provisionDockerMachine
	I0210 11:11:20.835499  792122 start.go:293] postStartSetup for "old-k8s-version-705847" (driver="docker")
	I0210 11:11:20.835516  792122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:11:20.835573  792122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:11:20.835624  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:20.869011  792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
	I0210 11:11:20.963656  792122 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:11:20.967660  792122 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0210 11:11:20.967701  792122 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0210 11:11:20.967713  792122 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0210 11:11:20.967721  792122 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0210 11:11:20.967734  792122 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-576242/.minikube/addons for local assets ...
	I0210 11:11:20.967794  792122 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-576242/.minikube/files for local assets ...
	I0210 11:11:20.967878  792122 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem -> 5816292.pem in /etc/ssl/certs
	I0210 11:11:20.968001  792122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:11:20.978889  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem --> /etc/ssl/certs/5816292.pem (1708 bytes)
	I0210 11:11:21.008291  792122 start.go:296] duration metric: took 172.771886ms for postStartSetup
	I0210 11:11:21.008402  792122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 11:11:21.008470  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:21.037623  792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
	I0210 11:11:21.134522  792122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0210 11:11:21.142110  792122 fix.go:56] duration metric: took 4.979713355s for fixHost
	I0210 11:11:21.142147  792122 start.go:83] releasing machines lock for "old-k8s-version-705847", held for 4.979774146s
	I0210 11:11:21.142239  792122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-705847
	I0210 11:11:21.177817  792122 ssh_runner.go:195] Run: cat /version.json
	I0210 11:11:21.177878  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:21.178151  792122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:11:21.178225  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:21.217746  792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
	I0210 11:11:21.218527  792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
	I0210 11:11:21.480083  792122 ssh_runner.go:195] Run: systemctl --version
	I0210 11:11:21.492916  792122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 11:11:21.497852  792122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0210 11:11:21.543157  792122 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0210 11:11:21.543275  792122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:11:21.559960  792122 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0210 11:11:21.559988  792122 start.go:495] detecting cgroup driver to use...
	I0210 11:11:21.560053  792122 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0210 11:11:21.560126  792122 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 11:11:21.579682  792122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:11:21.597064  792122 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:11:21.597162  792122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:11:21.618078  792122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:11:21.635461  792122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:11:21.805663  792122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:11:21.950950  792122 docker.go:233] disabling docker service ...
	I0210 11:11:21.951058  792122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:11:21.964888  792122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:11:21.977757  792122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:11:22.145793  792122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:11:22.290886  792122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:11:22.307242  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:11:22.336235  792122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0210 11:11:22.347968  792122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 11:11:22.359251  792122 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 11:11:22.359351  792122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 11:11:22.373782  792122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:11:22.391129  792122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 11:11:22.403898  792122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:11:22.419027  792122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:11:22.431573  792122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 11:11:22.443361  792122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:11:22.455683  792122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:11:22.463989  792122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:11:22.619380  792122 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 11:11:22.860525  792122 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0210 11:11:22.860623  792122 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0210 11:11:22.865569  792122 start.go:563] Will wait 60s for crictl version
	I0210 11:11:22.865662  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:11:22.874367  792122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:11:22.979096  792122 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0210 11:11:22.979194  792122 ssh_runner.go:195] Run: containerd --version
	I0210 11:11:23.006049  792122 ssh_runner.go:195] Run: containerd --version
	I0210 11:11:23.032612  792122 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	I0210 11:11:23.035483  792122 cli_runner.go:164] Run: docker network inspect old-k8s-version-705847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0210 11:11:23.056374  792122 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0210 11:11:23.060980  792122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:11:23.072675  792122 kubeadm.go:883] updating cluster {Name:old-k8s-version-705847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:11:23.072806  792122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0210 11:11:23.072868  792122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:11:23.126444  792122 containerd.go:627] all images are preloaded for containerd runtime.
	I0210 11:11:23.126472  792122 containerd.go:534] Images already preloaded, skipping extraction
	I0210 11:11:23.126540  792122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:11:23.171932  792122 containerd.go:627] all images are preloaded for containerd runtime.
	I0210 11:11:23.171956  792122 cache_images.go:84] Images are preloaded, skipping loading
	I0210 11:11:23.171965  792122 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0210 11:11:23.172120  792122 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-705847 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:11:23.172206  792122 ssh_runner.go:195] Run: sudo crictl info
	I0210 11:11:23.231989  792122 cni.go:84] Creating CNI manager for ""
	I0210 11:11:23.232018  792122 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 11:11:23.232033  792122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 11:11:23.232055  792122 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-705847 NodeName:old-k8s-version-705847 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0210 11:11:23.232196  792122 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-705847"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:11:23.232269  792122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0210 11:11:23.247020  792122 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:11:23.247115  792122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:11:23.256261  792122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0210 11:11:23.275387  792122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:11:23.295903  792122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0210 11:11:23.315336  792122 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0210 11:11:23.319026  792122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:11:23.331205  792122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:11:23.440997  792122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:11:23.458082  792122 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847 for IP: 192.168.76.2
	I0210 11:11:23.458105  792122 certs.go:194] generating shared ca certs ...
	I0210 11:11:23.458121  792122 certs.go:226] acquiring lock for ca certs: {Name:mk41210dcb5a25827819de2f65fc930debb2adb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:11:23.458327  792122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-576242/.minikube/ca.key
	I0210 11:11:23.458397  792122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.key
	I0210 11:11:23.458412  792122 certs.go:256] generating profile certs ...
	I0210 11:11:23.458516  792122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.key
	I0210 11:11:23.458611  792122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/apiserver.key.135f3f41
	I0210 11:11:23.458701  792122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/proxy-client.key
	I0210 11:11:23.458860  792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629.pem (1338 bytes)
	W0210 11:11:23.458916  792122 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629_empty.pem, impossibly tiny 0 bytes
	I0210 11:11:23.458932  792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:11:23.458973  792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:11:23.459027  792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:11:23.459064  792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem (1679 bytes)
	I0210 11:11:23.459142  792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem (1708 bytes)
	I0210 11:11:23.459782  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:11:23.536925  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 11:11:23.621168  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:11:23.689098  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 11:11:23.725956  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0210 11:11:23.765962  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 11:11:23.807991  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:11:23.858636  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:11:23.897694  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem --> /usr/share/ca-certificates/5816292.pem (1708 bytes)
	I0210 11:11:23.944379  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:11:23.985045  792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629.pem --> /usr/share/ca-certificates/581629.pem (1338 bytes)
	I0210 11:11:24.036575  792122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:11:24.073725  792122 ssh_runner.go:195] Run: openssl version
	I0210 11:11:24.083331  792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/581629.pem && ln -fs /usr/share/ca-certificates/581629.pem /etc/ssl/certs/581629.pem"
	I0210 11:11:24.103811  792122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/581629.pem
	I0210 11:11:24.112042  792122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:32 /usr/share/ca-certificates/581629.pem
	I0210 11:11:24.112156  792122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/581629.pem
	I0210 11:11:24.126986  792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/581629.pem /etc/ssl/certs/51391683.0"
	I0210 11:11:24.145917  792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5816292.pem && ln -fs /usr/share/ca-certificates/5816292.pem /etc/ssl/certs/5816292.pem"
	I0210 11:11:24.160706  792122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5816292.pem
	I0210 11:11:24.164475  792122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:32 /usr/share/ca-certificates/5816292.pem
	I0210 11:11:24.164586  792122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5816292.pem
	I0210 11:11:24.175705  792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5816292.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:11:24.189373  792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:11:24.198520  792122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:11:24.205612  792122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:11:24.205713  792122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:11:24.215619  792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:11:24.224354  792122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:11:24.233235  792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 11:11:24.244555  792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 11:11:24.252060  792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 11:11:24.262036  792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 11:11:24.269421  792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 11:11:24.283495  792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 11:11:24.293463  792122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-705847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:11:24.293640  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0210 11:11:24.293728  792122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:11:24.390830  792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
	I0210 11:11:24.390915  792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
	I0210 11:11:24.390935  792122 cri.go:89] found id: "3a1155cdb6488532d05c7f84248ca7fed91cf6700ec92941d37ec310ac01c20e"
	I0210 11:11:24.390954  792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
	I0210 11:11:24.390985  792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
	I0210 11:11:24.391004  792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	I0210 11:11:24.391022  792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
	I0210 11:11:24.391041  792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
	I0210 11:11:24.391075  792122 cri.go:89] found id: ""
	I0210 11:11:24.391161  792122 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0210 11:11:24.407017  792122 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-02-10T11:11:24Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
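The runc warning just above is benign on this restart path: before reconfiguring the control plane, minikube asks runc for any paused kube-system containers, and `/run/containerd/runc/k8s.io` does not exist because nothing was ever paused, so the step is skipped and startup continues. A minimal Go sketch of that check follows; the helper name and the choice to treat a missing runc root as "no paused containers" are assumptions read off this log, not minikube's actual code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listPaused mirrors the step logged at kubeadm.go:399 above: ask runc for
	// its container list under the containerd runc root and, when that state
	// directory has never been created (the "no such file or directory" case),
	// report "nothing paused" instead of failing the whole start.
	func listPaused(root string) (string, error) {
		out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "no such file or directory") {
				return "", nil // no runc state dir, so no container to unpause
			}
			return "", fmt.Errorf("runc list: %v: %s", err, out)
		}
		return string(out), nil
	}

	func main() {
		paused, err := listPaused("/run/containerd/runc/k8s.io")
		if err != nil {
			fmt.Println("warning, continuing anyway:", err)
			return
		}
		fmt.Println("paused containers:", paused)
	}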
	I0210 11:11:24.407151  792122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:11:24.418766  792122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 11:11:24.418844  792122 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 11:11:24.418925  792122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 11:11:24.432478  792122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 11:11:24.433021  792122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-705847" does not appear in /home/jenkins/minikube-integration/20385-576242/kubeconfig
	I0210 11:11:24.433184  792122 kubeconfig.go:62] /home/jenkins/minikube-integration/20385-576242/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-705847" cluster setting kubeconfig missing "old-k8s-version-705847" context setting]
	I0210 11:11:24.433600  792122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/kubeconfig: {Name:mkb94ed977d6ca716789df506e8beb4caa6483af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:11:24.435185  792122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 11:11:24.447496  792122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0210 11:11:24.447575  792122 kubeadm.go:597] duration metric: took 28.708683ms to restartPrimaryControlPlane
	I0210 11:11:24.447599  792122 kubeadm.go:394] duration metric: took 154.146907ms to StartCluster
	I0210 11:11:24.447637  792122 settings.go:142] acquiring lock: {Name:mk7602bd83375ef51e640bdffea1b5615cccb289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:11:24.447719  792122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20385-576242/kubeconfig
	I0210 11:11:24.448363  792122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/kubeconfig: {Name:mkb94ed977d6ca716789df506e8beb4caa6483af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:11:24.449352  792122 config.go:182] Loaded profile config "old-k8s-version-705847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0210 11:11:24.449428  792122 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0210 11:11:24.449493  792122 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 11:11:24.449869  792122 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-705847"
	I0210 11:11:24.449888  792122 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-705847"
	W0210 11:11:24.449895  792122 addons.go:247] addon storage-provisioner should already be in state true
	I0210 11:11:24.449944  792122 host.go:66] Checking if "old-k8s-version-705847" exists ...
	I0210 11:11:24.450507  792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
	I0210 11:11:24.450741  792122 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-705847"
	I0210 11:11:24.450777  792122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-705847"
	I0210 11:11:24.451238  792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
	I0210 11:11:24.452288  792122 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-705847"
	I0210 11:11:24.452308  792122 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-705847"
	W0210 11:11:24.452316  792122 addons.go:247] addon metrics-server should already be in state true
	I0210 11:11:24.452348  792122 host.go:66] Checking if "old-k8s-version-705847" exists ...
	I0210 11:11:24.452758  792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
	I0210 11:11:24.453089  792122 addons.go:69] Setting dashboard=true in profile "old-k8s-version-705847"
	I0210 11:11:24.453127  792122 addons.go:238] Setting addon dashboard=true in "old-k8s-version-705847"
	W0210 11:11:24.453138  792122 addons.go:247] addon dashboard should already be in state true
	I0210 11:11:24.453164  792122 host.go:66] Checking if "old-k8s-version-705847" exists ...
	I0210 11:11:24.453680  792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
	I0210 11:11:24.460143  792122 out.go:177] * Verifying Kubernetes components...
	I0210 11:11:24.465674  792122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:11:24.507561  792122 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0210 11:11:24.511359  792122 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0210 11:11:24.514349  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0210 11:11:24.514376  792122 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0210 11:11:24.514452  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:24.523507  792122 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-705847"
	W0210 11:11:24.523530  792122 addons.go:247] addon default-storageclass should already be in state true
	I0210 11:11:24.523555  792122 host.go:66] Checking if "old-k8s-version-705847" exists ...
	I0210 11:11:24.523956  792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
	I0210 11:11:24.546433  792122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:11:24.552501  792122 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:11:24.552521  792122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 11:11:24.552588  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:24.561561  792122 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0210 11:11:24.569573  792122 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 11:11:24.569610  792122 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 11:11:24.569680  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:24.583548  792122 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 11:11:24.583583  792122 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 11:11:24.583656  792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
	I0210 11:11:24.603286  792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
	I0210 11:11:24.624884  792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
	I0210 11:11:24.655484  792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
	I0210 11:11:24.656660  792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
	I0210 11:11:24.797935  792122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:11:24.853207  792122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 11:11:24.853227  792122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0210 11:11:24.860987  792122 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-705847" to be "Ready" ...
	I0210 11:11:24.899619  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0210 11:11:24.899642  792122 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0210 11:11:24.902356  792122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 11:11:24.902376  792122 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 11:11:24.940209  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:11:24.956611  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0210 11:11:24.956634  792122 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0210 11:11:24.972760  792122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:11:24.972784  792122 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 11:11:24.989039  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 11:11:25.038226  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:11:25.048383  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0210 11:11:25.048410  792122 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0210 11:11:25.210003  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0210 11:11:25.210027  792122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0210 11:11:25.258029  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.258066  792122 retry.go:31] will retry after 354.239843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
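The "apply failed, will retry" / "will retry after ..." pairs that begin here repeat for the next several seconds: the restarted apiserver is not yet listening on localhost:8443, so every addon manifest apply is refused and re-queued with a short, growing delay until the control plane comes back up (which it does at 11:11:42 below). The Go sketch below shows that retry shape; the attempt count, the strict doubling of the delay, and the single-manifest example are illustrative assumptions, while the kubectl binary and kubeconfig paths are taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply --force` until it succeeds or the
	// attempts run out, roughly the behaviour reported by the retry.go:31 lines
	// above while the apiserver is still coming back up.
	func applyWithRetry(manifests []string, attempts int) error {
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.20.0/kubectl", "apply", "--force"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		delay := 300 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("%v: %s", err, out)
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, lastErr)
			time.Sleep(delay)
			delay *= 2 // the real delays in the log are jittered, not a strict doubling
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry([]string{"/etc/kubernetes/addons/storage-provisioner.yaml"}, 10); err != nil {
			fmt.Println("giving up:", err)
		}
	}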
	I0210 11:11:25.304724  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0210 11:11:25.304746  792122 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0210 11:11:25.318737  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.318794  792122 retry.go:31] will retry after 165.988594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.356062  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0210 11:11:25.356087  792122 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0210 11:11:25.358528  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.358558  792122 retry.go:31] will retry after 218.751579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.380342  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0210 11:11:25.380407  792122 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0210 11:11:25.403764  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0210 11:11:25.403790  792122 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0210 11:11:25.422120  792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:11:25.422144  792122 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0210 11:11:25.441134  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:11:25.485496  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0210 11:11:25.530016  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.530050  792122 retry.go:31] will retry after 272.072779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:25.568506  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.568587  792122 retry.go:31] will retry after 436.399785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.577736  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:11:25.613075  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0210 11:11:25.719121  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.719159  792122 retry.go:31] will retry after 218.411415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:25.748822  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.748859  792122 retry.go:31] will retry after 286.400128ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.803180  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0210 11:11:25.904297  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.904340  792122 retry.go:31] will retry after 279.923457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:25.938645  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:11:26.006045  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0210 11:11:26.036399  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0210 11:11:26.060096  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:26.060130  792122 retry.go:31] will retry after 704.765952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:26.184442  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0210 11:11:26.295511  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:26.295546  792122 retry.go:31] will retry after 443.099927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:26.295643  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:26.295677  792122 retry.go:31] will retry after 680.096408ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:26.363775  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:26.363808  792122 retry.go:31] will retry after 708.016662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:26.739422  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0210 11:11:26.765814  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:11:26.862494  792122 node_ready.go:53] error getting node "old-k8s-version-705847": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-705847": dial tcp 192.168.76.2:8443: connect: connection refused
	W0210 11:11:26.901986  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:26.902022  792122 retry.go:31] will retry after 1.10804755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:26.921795  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:26.921879  792122 retry.go:31] will retry after 1.043194883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:26.976183  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:11:27.072634  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0210 11:11:27.090874  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:27.090984  792122 retry.go:31] will retry after 475.282466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:27.194195  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:27.194290  792122 retry.go:31] will retry after 744.668813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:27.567465  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0210 11:11:27.642188  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:27.642221  792122 retry.go:31] will retry after 1.775042521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:27.940140  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:11:27.965531  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:11:28.010931  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0210 11:11:28.036378  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:28.036414  792122 retry.go:31] will retry after 830.931937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:28.081898  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:28.081934  792122 retry.go:31] will retry after 1.127697549s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:28.117427  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:28.117462  792122 retry.go:31] will retry after 1.115774173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:28.868310  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0210 11:11:28.950510  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:28.950542  792122 retry.go:31] will retry after 2.371448727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:29.209968  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:11:29.234273  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0210 11:11:29.299885  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:29.299920  792122 retry.go:31] will retry after 2.749982384s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:29.338478  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:29.338513  792122 retry.go:31] will retry after 957.280972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:29.362077  792122 node_ready.go:53] error getting node "old-k8s-version-705847": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-705847": dial tcp 192.168.76.2:8443: connect: connection refused
	I0210 11:11:29.418394  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0210 11:11:29.490524  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:29.490561  792122 retry.go:31] will retry after 2.694787037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:30.296009  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0210 11:11:30.402862  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:30.402939  792122 retry.go:31] will retry after 2.613930879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:31.322578  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:11:31.362226  792122 node_ready.go:53] error getting node "old-k8s-version-705847": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-705847": dial tcp 192.168.76.2:8443: connect: connection refused
	W0210 11:11:31.448771  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:31.448800  792122 retry.go:31] will retry after 3.556165586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:32.050740  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:11:32.186145  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0210 11:11:32.286620  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:32.286649  792122 retry.go:31] will retry after 3.688898467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0210 11:11:32.344854  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:32.344886  792122 retry.go:31] will retry after 2.718862749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:33.017391  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0210 11:11:33.304223  792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:33.304253  792122 retry.go:31] will retry after 5.591745868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0210 11:11:35.006023  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0210 11:11:35.063907  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:11:35.975696  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 11:11:38.897639  792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0210 11:11:42.497806  792122 node_ready.go:49] node "old-k8s-version-705847" has status "Ready":"True"
	I0210 11:11:42.497829  792122 node_ready.go:38] duration metric: took 17.636742451s for node "old-k8s-version-705847" to be "Ready" ...
	I0210 11:11:42.497841  792122 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:11:42.570572  792122 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-7fkgl" in "kube-system" namespace to be "Ready" ...
	I0210 11:11:42.790395  792122 pod_ready.go:93] pod "coredns-74ff55c5b-7fkgl" in "kube-system" namespace has status "Ready":"True"
	I0210 11:11:42.790465  792122 pod_ready.go:82] duration metric: took 219.813767ms for pod "coredns-74ff55c5b-7fkgl" in "kube-system" namespace to be "Ready" ...
	I0210 11:11:42.790492  792122 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
	I0210 11:11:42.876797  792122 pod_ready.go:93] pod "etcd-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"True"
	I0210 11:11:42.876869  792122 pod_ready.go:82] duration metric: took 86.355801ms for pod "etcd-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
	I0210 11:11:42.876899  792122 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
	I0210 11:11:42.888623  792122 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"True"
	I0210 11:11:42.888697  792122 pod_ready.go:82] duration metric: took 11.777178ms for pod "kube-apiserver-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
	I0210 11:11:42.888725  792122 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
	I0210 11:11:43.959052  792122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.952972726s)
	I0210 11:11:43.959299  792122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.895364656s)
	I0210 11:11:43.959418  792122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.983690912s)
	I0210 11:11:43.959459  792122 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-705847"
	I0210 11:11:43.959512  792122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.061849816s)
	I0210 11:11:43.963300  792122 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-705847 addons enable metrics-server
	
	I0210 11:11:43.967878  792122 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0210 11:11:43.970836  792122 addons.go:514] duration metric: took 19.521343547s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0210 11:11:44.893807  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:11:46.897527  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:11:49.394509  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:11:51.894013  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:11:53.895036  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:11:55.904815  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:11:58.393438  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:00.416706  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:02.894788  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:04.896093  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:06.896817  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:09.395491  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:11.894043  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:13.894340  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:16.394012  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:18.394251  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:20.894790  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:22.897293  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:25.394833  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:27.394878  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:29.396928  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:31.894532  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:34.394217  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:36.395032  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:38.396249  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:40.894450  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:42.901194  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:45.395423  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:47.895201  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:50.394554  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:52.894148  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:54.895047  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:56.895251  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:12:59.394365  792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:00.419446  792122 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"True"
	I0210 11:13:00.419487  792122 pod_ready.go:82] duration metric: took 1m17.530741501s for pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
	I0210 11:13:00.419505  792122 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qt8rk" in "kube-system" namespace to be "Ready" ...
	I0210 11:13:00.425620  792122 pod_ready.go:93] pod "kube-proxy-qt8rk" in "kube-system" namespace has status "Ready":"True"
	I0210 11:13:00.425648  792122 pod_ready.go:82] duration metric: took 6.132546ms for pod "kube-proxy-qt8rk" in "kube-system" namespace to be "Ready" ...
	I0210 11:13:00.425662  792122 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
	I0210 11:13:02.431657  792122 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:04.931083  792122 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:06.931948  792122 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:09.430693  792122 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"True"
	I0210 11:13:09.430718  792122 pod_ready.go:82] duration metric: took 9.005047393s for pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
	I0210 11:13:09.430731  792122 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace to be "Ready" ...
	I0210 11:13:11.436362  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:13.436582  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:15.936385  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:17.936877  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:20.435894  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:22.436003  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:24.436351  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:26.936498  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:29.436682  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:31.936793  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:33.937359  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:36.437798  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:38.936440  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:40.937292  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:43.436209  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:45.436616  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:47.937193  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:50.436291  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:52.436723  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:54.936588  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:57.435458  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:13:59.436657  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:01.936887  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:03.937487  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:06.436671  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:08.941715  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:11.436959  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:13.497480  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:15.937941  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:18.436313  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:20.935995  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:22.936126  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:25.436984  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:27.936659  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:30.436634  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:32.436812  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:34.936971  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:37.437007  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:39.437145  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:41.935991  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:44.437013  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:46.437311  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:48.936228  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:50.936540  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:52.937112  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:55.436400  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:57.936329  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:14:59.936423  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:01.936918  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:03.962190  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:06.436269  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:08.436478  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:10.939425  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:13.438546  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:15.936461  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:17.937109  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:20.436649  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:22.936257  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:24.936644  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:26.936928  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:29.436313  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:31.936767  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:34.435797  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:36.436776  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:38.437350  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:40.936694  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:43.437290  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:45.936133  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:48.436279  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:50.436488  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:52.937555  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:55.436217  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:15:57.936407  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:00.446045  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:02.936059  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:04.936115  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:06.936974  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:09.436519  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:11.936908  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:14.436578  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:16.436983  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:18.936240  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:20.936406  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:22.936754  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:25.437681  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:27.935599  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:29.937100  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:31.948008  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:34.436184  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:36.436756  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:38.936032  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:40.936524  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:43.436272  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:45.437061  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:47.938001  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:49.953799  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:52.436865  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:54.990857  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:57.436603  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:16:59.936464  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:17:01.945370  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:17:04.444285  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:17:06.951299  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:17:09.437288  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:17:09.437357  792122 pod_ready.go:82] duration metric: took 4m0.006581256s for pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace to be "Ready" ...
	E0210 11:17:09.437376  792122 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0210 11:17:09.437385  792122 pod_ready.go:39] duration metric: took 5m26.939532937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:17:09.437403  792122 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:17:09.437440  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:17:09.437540  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:17:09.487231  792122 cri.go:89] found id: "ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
	I0210 11:17:09.487255  792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	I0210 11:17:09.487276  792122 cri.go:89] found id: ""
	I0210 11:17:09.487283  792122 logs.go:282] 2 containers: [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01]
	I0210 11:17:09.487345  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.491577  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.495484  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0210 11:17:09.495557  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:17:09.544535  792122 cri.go:89] found id: "4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
	I0210 11:17:09.544558  792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
	I0210 11:17:09.544563  792122 cri.go:89] found id: ""
	I0210 11:17:09.544570  792122 logs.go:282] 2 containers: [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba]
	I0210 11:17:09.544628  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.548930  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.552295  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0210 11:17:09.552365  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:17:09.604781  792122 cri.go:89] found id: "23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
	I0210 11:17:09.604800  792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
	I0210 11:17:09.604806  792122 cri.go:89] found id: ""
	I0210 11:17:09.604812  792122 logs.go:282] 2 containers: [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d]
	I0210 11:17:09.604866  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.608845  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.613042  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:17:09.613164  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:17:09.658259  792122 cri.go:89] found id: "2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
	I0210 11:17:09.658335  792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
	I0210 11:17:09.658389  792122 cri.go:89] found id: ""
	I0210 11:17:09.658414  792122 logs.go:282] 2 containers: [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd]
	I0210 11:17:09.658491  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.662928  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.666904  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:17:09.667021  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:17:09.714379  792122 cri.go:89] found id: "2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
	I0210 11:17:09.714455  792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
	I0210 11:17:09.714475  792122 cri.go:89] found id: ""
	I0210 11:17:09.714502  792122 logs.go:282] 2 containers: [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1]
	I0210 11:17:09.714574  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.718758  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.722517  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:17:09.722636  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:17:09.771474  792122 cri.go:89] found id: "aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
	I0210 11:17:09.771545  792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
	I0210 11:17:09.771565  792122 cri.go:89] found id: ""
	I0210 11:17:09.771588  792122 logs.go:282] 2 containers: [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d]
	I0210 11:17:09.771661  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.775353  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.779153  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0210 11:17:09.779273  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:17:09.825744  792122 cri.go:89] found id: "63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
	I0210 11:17:09.825818  792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
	I0210 11:17:09.825838  792122 cri.go:89] found id: ""
	I0210 11:17:09.825861  792122 logs.go:282] 2 containers: [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4]
	I0210 11:17:09.825933  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.829905  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.833685  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0210 11:17:09.833803  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 11:17:09.880184  792122 cri.go:89] found id: "b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
	I0210 11:17:09.880260  792122 cri.go:89] found id: "221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
	I0210 11:17:09.880279  792122 cri.go:89] found id: ""
	I0210 11:17:09.880303  792122 logs.go:282] 2 containers: [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525]
	I0210 11:17:09.880385  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.884665  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.888489  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:17:09.888609  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:17:09.933140  792122 cri.go:89] found id: "6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
	I0210 11:17:09.933213  792122 cri.go:89] found id: ""
	I0210 11:17:09.933235  792122 logs.go:282] 1 containers: [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7]
	I0210 11:17:09.933325  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.937792  792122 logs.go:123] Gathering logs for dmesg ...
	I0210 11:17:09.937862  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:17:09.973568  792122 logs.go:123] Gathering logs for kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] ...
	I0210 11:17:09.973650  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	I0210 11:17:10.088454  792122 logs.go:123] Gathering logs for kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] ...
	I0210 11:17:10.088500  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
	I0210 11:17:10.153844  792122 logs.go:123] Gathering logs for kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] ...
	I0210 11:17:10.153874  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
	I0210 11:17:10.260745  792122 logs.go:123] Gathering logs for etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] ...
	I0210 11:17:10.260782  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
	I0210 11:17:10.321419  792122 logs.go:123] Gathering logs for etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] ...
	I0210 11:17:10.321451  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
	I0210 11:17:10.397177  792122 logs.go:123] Gathering logs for coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] ...
	I0210 11:17:10.397207  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
	I0210 11:17:10.462194  792122 logs.go:123] Gathering logs for kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] ...
	I0210 11:17:10.462224  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
	I0210 11:17:10.528776  792122 logs.go:123] Gathering logs for kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] ...
	I0210 11:17:10.528803  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
	I0210 11:17:10.574450  792122 logs.go:123] Gathering logs for kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] ...
	I0210 11:17:10.574521  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
	I0210 11:17:10.652275  792122 logs.go:123] Gathering logs for kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] ...
	I0210 11:17:10.652360  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
	I0210 11:17:10.716278  792122 logs.go:123] Gathering logs for storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] ...
	I0210 11:17:10.716454  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
	I0210 11:17:10.765251  792122 logs.go:123] Gathering logs for kubelet ...
	I0210 11:17:10.765318  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0210 11:17:10.826952  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.495697     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.827210  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496065     665 reflector.go:138] object-"kube-system"/"coredns-token-7cchl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-7cchl" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.827464  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496388     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r7rrz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r7rrz" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.827695  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496738     665 reflector.go:138] object-"default"/"default-token-q8wzb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-q8wzb" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.827929  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.500993     665 reflector.go:138] object-"kube-system"/"kindnet-token-h7brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h7brt" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.828154  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501261     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.828396  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501486     665 reflector.go:138] object-"kube-system"/"metrics-server-token-pddsx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-pddsx" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.828635  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501700     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-92pf5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-92pf5" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.835472  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:43 old-k8s-version-705847 kubelet[665]: E0210 11:11:43.988520     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.835682  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:44 old-k8s-version-705847 kubelet[665]: E0210 11:11:44.494625     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.839280  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:57 old-k8s-version-705847 kubelet[665]: E0210 11:11:57.176598     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.841560  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:06 old-k8s-version-705847 kubelet[665]: E0210 11:12:06.587650     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.841923  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:07 old-k8s-version-705847 kubelet[665]: E0210 11:12:07.588161     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.842131  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:12 old-k8s-version-705847 kubelet[665]: E0210 11:12:12.166247     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.842823  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:13 old-k8s-version-705847 kubelet[665]: E0210 11:12:13.359119     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.843281  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:15 old-k8s-version-705847 kubelet[665]: E0210 11:12:15.615500     665 pod_workers.go:191] Error syncing pod 9fb88c78-7e13-4c39-b861-6a75febd2f29 ("storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"
	W0210 11:17:10.844228  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:24 old-k8s-version-705847 kubelet[665]: E0210 11:12:24.650563     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.846762  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:26 old-k8s-version-705847 kubelet[665]: E0210 11:12:26.179066     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.847253  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:33 old-k8s-version-705847 kubelet[665]: E0210 11:12:33.359712     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.847462  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:38 old-k8s-version-705847 kubelet[665]: E0210 11:12:38.166028     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.847813  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:44 old-k8s-version-705847 kubelet[665]: E0210 11:12:44.165493     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.848022  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:52 old-k8s-version-705847 kubelet[665]: E0210 11:12:52.166523     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.848632  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:58 old-k8s-version-705847 kubelet[665]: E0210 11:12:58.763561     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.848983  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:03 old-k8s-version-705847 kubelet[665]: E0210 11:13:03.358918     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.849189  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:06 old-k8s-version-705847 kubelet[665]: E0210 11:13:06.166020     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.849553  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:15 old-k8s-version-705847 kubelet[665]: E0210 11:13:15.165381     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.852128  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:20 old-k8s-version-705847 kubelet[665]: E0210 11:13:20.182857     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.852482  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:27 old-k8s-version-705847 kubelet[665]: E0210 11:13:27.165926     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.852710  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:34 old-k8s-version-705847 kubelet[665]: E0210 11:13:34.166696     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.853073  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:38 old-k8s-version-705847 kubelet[665]: E0210 11:13:38.165396     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.853287  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:46 old-k8s-version-705847 kubelet[665]: E0210 11:13:46.167918     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.853938  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:51 old-k8s-version-705847 kubelet[665]: E0210 11:13:51.896354     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.854291  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:53 old-k8s-version-705847 kubelet[665]: E0210 11:13:53.359574     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.854509  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:00 old-k8s-version-705847 kubelet[665]: E0210 11:14:00.171443     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.854876  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:05 old-k8s-version-705847 kubelet[665]: E0210 11:14:05.165923     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.855096  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:12 old-k8s-version-705847 kubelet[665]: E0210 11:14:12.166921     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.855567  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:17 old-k8s-version-705847 kubelet[665]: E0210 11:14:17.165864     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.855778  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:26 old-k8s-version-705847 kubelet[665]: E0210 11:14:26.165733     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.856158  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:31 old-k8s-version-705847 kubelet[665]: E0210 11:14:31.165921     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.856381  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:40 old-k8s-version-705847 kubelet[665]: E0210 11:14:40.166598     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.856738  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:46 old-k8s-version-705847 kubelet[665]: E0210 11:14:46.165929     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.859236  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:55 old-k8s-version-705847 kubelet[665]: E0210 11:14:55.174499     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.859591  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:58 old-k8s-version-705847 kubelet[665]: E0210 11:14:58.165404     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.859943  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:09 old-k8s-version-705847 kubelet[665]: E0210 11:15:09.165442     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.864467  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:10 old-k8s-version-705847 kubelet[665]: E0210 11:15:10.167175     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.865098  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:21 old-k8s-version-705847 kubelet[665]: E0210 11:15:21.169488     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.865311  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:22 old-k8s-version-705847 kubelet[665]: E0210 11:15:22.173180     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.865688  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:23 old-k8s-version-705847 kubelet[665]: E0210 11:15:23.359507     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.865901  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:35 old-k8s-version-705847 kubelet[665]: E0210 11:15:35.165986     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.866255  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:36 old-k8s-version-705847 kubelet[665]: E0210 11:15:36.165557     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.866470  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.866821  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.867030  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.867413  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.867627  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.868007  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.868230  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.868594  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.868802  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.869151  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.869370  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.869737  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.869948  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.870300  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	I0210 11:17:10.870323  792122 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:17:10.870349  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 11:17:11.085761  792122 logs.go:123] Gathering logs for coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] ...
	I0210 11:17:11.085801  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
	I0210 11:17:11.150903  792122 logs.go:123] Gathering logs for kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] ...
	I0210 11:17:11.150935  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
	I0210 11:17:11.209154  792122 logs.go:123] Gathering logs for storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] ...
	I0210 11:17:11.209226  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
	I0210 11:17:11.266387  792122 logs.go:123] Gathering logs for container status ...
	I0210 11:17:11.266414  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:17:11.330870  792122 logs.go:123] Gathering logs for kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] ...
	I0210 11:17:11.330958  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
	I0210 11:17:11.456996  792122 logs.go:123] Gathering logs for kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] ...
	I0210 11:17:11.457085  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
	I0210 11:17:11.505129  792122 logs.go:123] Gathering logs for kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] ...
	I0210 11:17:11.505201  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
	I0210 11:17:11.562557  792122 logs.go:123] Gathering logs for containerd ...
	I0210 11:17:11.562640  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0210 11:17:11.629917  792122 out.go:358] Setting ErrFile to fd 2...
	I0210 11:17:11.629991  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0210 11:17:11.630090  792122 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0210 11:17:11.630134  792122 out.go:270]   Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	  Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:11.630302  792122 out.go:270]   Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:11.630335  792122 out.go:270]   Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	  Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:11.630379  792122 out.go:270]   Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:11.630431  792122 out.go:270]   Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	  Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	I0210 11:17:11.630463  792122 out.go:358] Setting ErrFile to fd 2...
	I0210 11:17:11.630494  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:17:21.633459  792122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:17:21.646627  792122 api_server.go:72] duration metric: took 5m57.197162359s to wait for apiserver process to appear ...
	I0210 11:17:21.646652  792122 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:17:21.646689  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:17:21.646747  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:17:21.702943  792122 cri.go:89] found id: "ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
	I0210 11:17:21.702968  792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	I0210 11:17:21.702974  792122 cri.go:89] found id: ""
	I0210 11:17:21.702981  792122 logs.go:282] 2 containers: [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01]
	I0210 11:17:21.703043  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.706808  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.711614  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0210 11:17:21.711686  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:17:21.769142  792122 cri.go:89] found id: "4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
	I0210 11:17:21.769166  792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
	I0210 11:17:21.769171  792122 cri.go:89] found id: ""
	I0210 11:17:21.769178  792122 logs.go:282] 2 containers: [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba]
	I0210 11:17:21.769231  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.772814  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.776371  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0210 11:17:21.776467  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:17:21.835068  792122 cri.go:89] found id: "23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
	I0210 11:17:21.835099  792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
	I0210 11:17:21.835105  792122 cri.go:89] found id: ""
	I0210 11:17:21.835112  792122 logs.go:282] 2 containers: [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d]
	I0210 11:17:21.835205  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.839601  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.843809  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:17:21.843906  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:17:21.894020  792122 cri.go:89] found id: "2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
	I0210 11:17:21.894042  792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
	I0210 11:17:21.894047  792122 cri.go:89] found id: ""
	I0210 11:17:21.894054  792122 logs.go:282] 2 containers: [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd]
	I0210 11:17:21.894151  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.898071  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.902515  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:17:21.902616  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:17:21.980105  792122 cri.go:89] found id: "2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
	I0210 11:17:21.980138  792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
	I0210 11:17:21.980144  792122 cri.go:89] found id: ""
	I0210 11:17:21.980151  792122 logs.go:282] 2 containers: [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1]
	I0210 11:17:21.980235  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.984322  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.987666  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:17:21.987780  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:17:22.059620  792122 cri.go:89] found id: "aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
	I0210 11:17:22.059644  792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
	I0210 11:17:22.059649  792122 cri.go:89] found id: ""
	I0210 11:17:22.059658  792122 logs.go:282] 2 containers: [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d]
	I0210 11:17:22.059744  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.063872  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.067934  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0210 11:17:22.068028  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:17:22.120294  792122 cri.go:89] found id: "63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
	I0210 11:17:22.120314  792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
	I0210 11:17:22.120319  792122 cri.go:89] found id: ""
	I0210 11:17:22.120326  792122 logs.go:282] 2 containers: [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4]
	I0210 11:17:22.120379  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.124012  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.133616  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0210 11:17:22.133685  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 11:17:22.193873  792122 cri.go:89] found id: "b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
	I0210 11:17:22.193892  792122 cri.go:89] found id: "221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
	I0210 11:17:22.193897  792122 cri.go:89] found id: ""
	I0210 11:17:22.193904  792122 logs.go:282] 2 containers: [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525]
	I0210 11:17:22.193959  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.197703  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.201260  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:17:22.201380  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:17:22.252446  792122 cri.go:89] found id: "6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
	I0210 11:17:22.252510  792122 cri.go:89] found id: ""
	I0210 11:17:22.252533  792122 logs.go:282] 1 containers: [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7]
	I0210 11:17:22.252606  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.256456  792122 logs.go:123] Gathering logs for kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] ...
	I0210 11:17:22.256522  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
	I0210 11:17:22.319127  792122 logs.go:123] Gathering logs for kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] ...
	I0210 11:17:22.319197  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
	I0210 11:17:22.371929  792122 logs.go:123] Gathering logs for kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] ...
	I0210 11:17:22.371996  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
	I0210 11:17:22.419946  792122 logs.go:123] Gathering logs for storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] ...
	I0210 11:17:22.420016  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
	I0210 11:17:22.500193  792122 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:17:22.500219  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 11:17:22.689017  792122 logs.go:123] Gathering logs for coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] ...
	I0210 11:17:22.689049  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
	I0210 11:17:22.771016  792122 logs.go:123] Gathering logs for kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] ...
	I0210 11:17:22.771047  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
	I0210 11:17:22.833433  792122 logs.go:123] Gathering logs for kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] ...
	I0210 11:17:22.833464  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
	I0210 11:17:22.899680  792122 logs.go:123] Gathering logs for containerd ...
	I0210 11:17:22.899757  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0210 11:17:22.995820  792122 logs.go:123] Gathering logs for dmesg ...
	I0210 11:17:22.995915  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:17:23.022911  792122 logs.go:123] Gathering logs for etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] ...
	I0210 11:17:23.022939  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
	I0210 11:17:23.088083  792122 logs.go:123] Gathering logs for coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] ...
	I0210 11:17:23.088257  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
	I0210 11:17:23.173762  792122 logs.go:123] Gathering logs for container status ...
	I0210 11:17:23.173836  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:17:23.230605  792122 logs.go:123] Gathering logs for kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] ...
	I0210 11:17:23.230682  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
	I0210 11:17:23.306650  792122 logs.go:123] Gathering logs for kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] ...
	I0210 11:17:23.306724  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	I0210 11:17:23.388460  792122 logs.go:123] Gathering logs for kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] ...
	I0210 11:17:23.388501  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
	I0210 11:17:23.442850  792122 logs.go:123] Gathering logs for kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] ...
	I0210 11:17:23.442879  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
	I0210 11:17:23.569314  792122 logs.go:123] Gathering logs for kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] ...
	I0210 11:17:23.569354  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
	I0210 11:17:23.623310  792122 logs.go:123] Gathering logs for storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] ...
	I0210 11:17:23.623338  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
	I0210 11:17:23.669314  792122 logs.go:123] Gathering logs for kubelet ...
	I0210 11:17:23.669343  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0210 11:17:23.735791  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.495697     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.736086  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496065     665 reflector.go:138] object-"kube-system"/"coredns-token-7cchl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-7cchl" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.736428  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496388     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r7rrz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r7rrz" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.736690  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496738     665 reflector.go:138] object-"default"/"default-token-q8wzb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-q8wzb" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.736907  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.500993     665 reflector.go:138] object-"kube-system"/"kindnet-token-h7brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h7brt" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.737110  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501261     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.737331  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501486     665 reflector.go:138] object-"kube-system"/"metrics-server-token-pddsx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-pddsx" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.737557  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501700     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-92pf5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-92pf5" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.744448  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:43 old-k8s-version-705847 kubelet[665]: E0210 11:11:43.988520     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.744641  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:44 old-k8s-version-705847 kubelet[665]: E0210 11:11:44.494625     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.748240  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:57 old-k8s-version-705847 kubelet[665]: E0210 11:11:57.176598     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.750410  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:06 old-k8s-version-705847 kubelet[665]: E0210 11:12:06.587650     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.750747  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:07 old-k8s-version-705847 kubelet[665]: E0210 11:12:07.588161     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.750932  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:12 old-k8s-version-705847 kubelet[665]: E0210 11:12:12.166247     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.751597  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:13 old-k8s-version-705847 kubelet[665]: E0210 11:12:13.359119     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.752034  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:15 old-k8s-version-705847 kubelet[665]: E0210 11:12:15.615500     665 pod_workers.go:191] Error syncing pod 9fb88c78-7e13-4c39-b861-6a75febd2f29 ("storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"
	W0210 11:17:23.752959  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:24 old-k8s-version-705847 kubelet[665]: E0210 11:12:24.650563     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.755482  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:26 old-k8s-version-705847 kubelet[665]: E0210 11:12:26.179066     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.755947  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:33 old-k8s-version-705847 kubelet[665]: E0210 11:12:33.359712     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.756133  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:38 old-k8s-version-705847 kubelet[665]: E0210 11:12:38.166028     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.756462  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:44 old-k8s-version-705847 kubelet[665]: E0210 11:12:44.165493     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.756668  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:52 old-k8s-version-705847 kubelet[665]: E0210 11:12:52.166523     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.757257  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:58 old-k8s-version-705847 kubelet[665]: E0210 11:12:58.763561     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.757662  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:03 old-k8s-version-705847 kubelet[665]: E0210 11:13:03.358918     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.757866  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:06 old-k8s-version-705847 kubelet[665]: E0210 11:13:06.166020     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.758208  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:15 old-k8s-version-705847 kubelet[665]: E0210 11:13:15.165381     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.760718  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:20 old-k8s-version-705847 kubelet[665]: E0210 11:13:20.182857     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.761078  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:27 old-k8s-version-705847 kubelet[665]: E0210 11:13:27.165926     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.761277  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:34 old-k8s-version-705847 kubelet[665]: E0210 11:13:34.166696     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.761645  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:38 old-k8s-version-705847 kubelet[665]: E0210 11:13:38.165396     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.761831  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:46 old-k8s-version-705847 kubelet[665]: E0210 11:13:46.167918     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.762418  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:51 old-k8s-version-705847 kubelet[665]: E0210 11:13:51.896354     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.762750  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:53 old-k8s-version-705847 kubelet[665]: E0210 11:13:53.359574     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.762936  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:00 old-k8s-version-705847 kubelet[665]: E0210 11:14:00.171443     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.763264  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:05 old-k8s-version-705847 kubelet[665]: E0210 11:14:05.165923     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.763453  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:12 old-k8s-version-705847 kubelet[665]: E0210 11:14:12.166921     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.763802  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:17 old-k8s-version-705847 kubelet[665]: E0210 11:14:17.165864     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.763989  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:26 old-k8s-version-705847 kubelet[665]: E0210 11:14:26.165733     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.764385  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:31 old-k8s-version-705847 kubelet[665]: E0210 11:14:31.165921     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.764574  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:40 old-k8s-version-705847 kubelet[665]: E0210 11:14:40.166598     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.764916  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:46 old-k8s-version-705847 kubelet[665]: E0210 11:14:46.165929     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.767429  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:55 old-k8s-version-705847 kubelet[665]: E0210 11:14:55.174499     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.767825  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:58 old-k8s-version-705847 kubelet[665]: E0210 11:14:58.165404     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.768160  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:09 old-k8s-version-705847 kubelet[665]: E0210 11:15:09.165442     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.768346  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:10 old-k8s-version-705847 kubelet[665]: E0210 11:15:10.167175     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.768960  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:21 old-k8s-version-705847 kubelet[665]: E0210 11:15:21.169488     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.769151  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:22 old-k8s-version-705847 kubelet[665]: E0210 11:15:22.173180     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.769564  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:23 old-k8s-version-705847 kubelet[665]: E0210 11:15:23.359507     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.769768  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:35 old-k8s-version-705847 kubelet[665]: E0210 11:15:35.165986     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.770114  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:36 old-k8s-version-705847 kubelet[665]: E0210 11:15:36.165557     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.770306  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.770642  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.770826  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.771154  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.771340  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.771666  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.771864  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.772192  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.772377  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.772703  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.772887  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.773213  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.773398  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.773731  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.774137  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0210 11:17:23.774168  792122 logs.go:123] Gathering logs for etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] ...
	I0210 11:17:23.774184  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
	I0210 11:17:23.845922  792122 logs.go:123] Gathering logs for kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] ...
	I0210 11:17:23.845950  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
	I0210 11:17:23.939309  792122 out.go:358] Setting ErrFile to fd 2...
	I0210 11:17:23.939399  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0210 11:17:23.939494  792122 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0210 11:17:23.939681  792122 out.go:270]   Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.939739  792122 out.go:270]   Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	  Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.939779  792122 out.go:270]   Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.939835  792122 out.go:270]   Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	  Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.939870  792122 out.go:270]   Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0210 11:17:23.939920  792122 out.go:358] Setting ErrFile to fd 2...
	I0210 11:17:23.939941  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:17:33.941594  792122 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0210 11:17:33.966671  792122 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0210 11:17:33.970082  792122 out.go:201] 
	W0210 11:17:33.973071  792122 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0210 11:17:33.973117  792122 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0210 11:17:33.973146  792122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0210 11:17:33.973158  792122 out.go:270] * 
	* 
	W0210 11:17:33.974109  792122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 11:17:33.977004  792122 out.go:201] 

                                                
                                                
** /stderr **
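The stderr block above ends with minikube's own follow-up suggestions for exit status 102 (K8S_UNHEALTHY_CONTROL_PLANE). The following is a minimal shell sketch of those suggested steps only; the retry command is copied from the failing invocation recorded in this report (a normal install would use `minikube` instead of the test binary `out/minikube-linux-arm64`), and a delete-and-retry is not confirmed by this report to resolve the v1.20.0 control-plane update failure:

	# Collect full logs to attach to a GitHub issue, as the output requests.
	out/minikube-linux-arm64 -p old-k8s-version-705847 logs --file=logs.txt

	# Suggested remediation from the output: purge all profiles and cached state, then retry the same start.
	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0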
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-705847
helpers_test.go:235: (dbg) docker inspect old-k8s-version-705847:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5",
	        "Created": "2025-02-10T11:08:15.654461625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 792321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-10T11:11:16.381808219Z",
	            "FinishedAt": "2025-02-10T11:11:15.181356464Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5/hosts",
	        "LogPath": "/var/lib/docker/containers/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5-json.log",
	        "Name": "/old-k8s-version-705847",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-705847:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-705847",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ab002ae6dcbc76d54756237b1d8f947fd6d10a3bdae1ea5ca0aa20c6446c2c67-init/diff:/var/lib/docker/overlay2/26239c014af6c1ba34d676e86726c37031bac25f65804c44ae4f8df935bea840/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ab002ae6dcbc76d54756237b1d8f947fd6d10a3bdae1ea5ca0aa20c6446c2c67/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ab002ae6dcbc76d54756237b1d8f947fd6d10a3bdae1ea5ca0aa20c6446c2c67/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ab002ae6dcbc76d54756237b1d8f947fd6d10a3bdae1ea5ca0aa20c6446c2c67/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-705847",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-705847/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-705847",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-705847",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-705847",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d29c017b5e755a03ffcce91a174ec923f04dc63dd87e46edaf834f84250587b",
	            "SandboxKey": "/var/run/docker/netns/5d29c017b5e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-705847": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "fc44ac08ef1f81a1cabfe5ec2acc66b7f9febc09e6d34d30523f23893af91f16",
	                    "EndpointID": "02bcf43005e65955eab1cc5f9bdb039c8ddafa874db2d40e40f1941e004fe9a9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-705847",
	                        "a745477e05fb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-705847 -n old-k8s-version-705847
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-705847 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-705847 logs -n 25: (3.159324309s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-369393                              | cert-expiration-369393   | jenkins | v1.35.0 | 10 Feb 25 11:06 UTC | 10 Feb 25 11:07 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-962978                               | force-systemd-env-962978 | jenkins | v1.35.0 | 10 Feb 25 11:07 UTC | 10 Feb 25 11:07 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-962978                            | force-systemd-env-962978 | jenkins | v1.35.0 | 10 Feb 25 11:07 UTC | 10 Feb 25 11:07 UTC |
	| start   | -p cert-options-679762                                 | cert-options-679762      | jenkins | v1.35.0 | 10 Feb 25 11:07 UTC | 10 Feb 25 11:08 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-679762 ssh                                | cert-options-679762      | jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-679762 -- sudo                         | cert-options-679762      | jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-679762                                 | cert-options-679762      | jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	| start   | -p old-k8s-version-705847                              | old-k8s-version-705847   | jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:10 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-369393                              | cert-expiration-369393   | jenkins | v1.35.0 | 10 Feb 25 11:10 UTC | 10 Feb 25 11:10 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-369393                              | cert-expiration-369393   | jenkins | v1.35.0 | 10 Feb 25 11:10 UTC | 10 Feb 25 11:10 UTC |
	| start   | -p no-preload-861376                                   | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:10 UTC | 10 Feb 25 11:11 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-705847        | old-k8s-version-705847   | jenkins | v1.35.0 | 10 Feb 25 11:11 UTC | 10 Feb 25 11:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-705847                              | old-k8s-version-705847   | jenkins | v1.35.0 | 10 Feb 25 11:11 UTC | 10 Feb 25 11:11 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-705847             | old-k8s-version-705847   | jenkins | v1.35.0 | 10 Feb 25 11:11 UTC | 10 Feb 25 11:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-705847                              | old-k8s-version-705847   | jenkins | v1.35.0 | 10 Feb 25 11:11 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-861376             | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:12 UTC | 10 Feb 25 11:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-861376                                   | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:12 UTC | 10 Feb 25 11:12 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-861376                  | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:12 UTC | 10 Feb 25 11:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-861376                                   | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:12 UTC | 10 Feb 25 11:16 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	| image   | no-preload-861376 image list                           | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-861376                                   | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-861376                                   | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-861376                                   | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
	| delete  | -p no-preload-861376                                   | no-preload-861376        | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
	| start   | -p embed-certs-822142                                  | embed-certs-822142       | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 11:17:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 11:17:07.790759  802973 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:17:07.790882  802973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:17:07.790894  802973 out.go:358] Setting ErrFile to fd 2...
	I0210 11:17:07.790900  802973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:17:07.791160  802973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 11:17:07.791583  802973 out.go:352] Setting JSON to false
	I0210 11:17:07.792674  802973 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14373,"bootTime":1739171855,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0210 11:17:07.792754  802973 start.go:139] virtualization:  
	I0210 11:17:07.796772  802973 out.go:177] * [embed-certs-822142] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0210 11:17:07.801121  802973 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:17:07.801324  802973 notify.go:220] Checking for updates...
	I0210 11:17:07.807471  802973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:17:07.810736  802973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	I0210 11:17:07.813830  802973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	I0210 11:17:07.816854  802973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0210 11:17:07.819810  802973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:17:07.823313  802973 config.go:182] Loaded profile config "old-k8s-version-705847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0210 11:17:07.823443  802973 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:17:07.854450  802973 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 11:17:07.854715  802973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 11:17:07.913226  802973 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-10 11:17:07.90331121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 11:17:07.913348  802973 docker.go:318] overlay module found
	I0210 11:17:07.916449  802973 out.go:177] * Using the docker driver based on user configuration
	I0210 11:17:07.919271  802973 start.go:297] selected driver: docker
	I0210 11:17:07.919290  802973 start.go:901] validating driver "docker" against <nil>
	I0210 11:17:07.919304  802973 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:17:07.920047  802973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 11:17:08.006093  802973 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-10 11:17:07.987821154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 11:17:08.006379  802973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 11:17:08.006623  802973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:17:08.009615  802973 out.go:177] * Using Docker driver with root privileges
	I0210 11:17:08.012671  802973 cni.go:84] Creating CNI manager for ""
	I0210 11:17:08.013196  802973 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 11:17:08.013236  802973 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 11:17:08.013378  802973 start.go:340] cluster config:
	{Name:embed-certs-822142 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:17:08.016735  802973 out.go:177] * Starting "embed-certs-822142" primary control-plane node in "embed-certs-822142" cluster
	I0210 11:17:08.019640  802973 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0210 11:17:08.022764  802973 out.go:177] * Pulling base image v0.0.46 ...
	I0210 11:17:08.025665  802973 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 11:17:08.025756  802973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0210 11:17:08.025771  802973 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0210 11:17:08.025789  802973 cache.go:56] Caching tarball of preloaded images
	I0210 11:17:08.025905  802973 preload.go:172] Found /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0210 11:17:08.025917  802973 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0210 11:17:08.026046  802973 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/config.json ...
	I0210 11:17:08.026102  802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/config.json: {Name:mkcf3cebecc98801d43dfd996a72ac5ae7403fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:17:08.047371  802973 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0210 11:17:08.047398  802973 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0210 11:17:08.047419  802973 cache.go:230] Successfully downloaded all kic artifacts
	I0210 11:17:08.047453  802973 start.go:360] acquireMachinesLock for embed-certs-822142: {Name:mk8e9768e203098d1ff183e3ceae266c8926e0e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:17:08.047579  802973 start.go:364] duration metric: took 102.517µs to acquireMachinesLock for "embed-certs-822142"
	I0210 11:17:08.047613  802973 start.go:93] Provisioning new machine with config: &{Name:embed-certs-822142 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0210 11:17:08.047686  802973 start.go:125] createHost starting for "" (driver="docker")
	I0210 11:17:06.951299  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:17:09.437288  792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
	I0210 11:17:09.437357  792122 pod_ready.go:82] duration metric: took 4m0.006581256s for pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace to be "Ready" ...
	E0210 11:17:09.437376  792122 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0210 11:17:09.437385  792122 pod_ready.go:39] duration metric: took 5m26.939532937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:17:09.437403  792122 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:17:09.437440  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:17:09.437540  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:17:09.487231  792122 cri.go:89] found id: "ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
	I0210 11:17:09.487255  792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	I0210 11:17:09.487276  792122 cri.go:89] found id: ""
	I0210 11:17:09.487283  792122 logs.go:282] 2 containers: [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01]
	I0210 11:17:09.487345  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.491577  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.495484  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0210 11:17:09.495557  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:17:09.544535  792122 cri.go:89] found id: "4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
	I0210 11:17:09.544558  792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
	I0210 11:17:09.544563  792122 cri.go:89] found id: ""
	I0210 11:17:09.544570  792122 logs.go:282] 2 containers: [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba]
	I0210 11:17:09.544628  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.548930  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.552295  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0210 11:17:09.552365  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:17:09.604781  792122 cri.go:89] found id: "23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
	I0210 11:17:09.604800  792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
	I0210 11:17:09.604806  792122 cri.go:89] found id: ""
	I0210 11:17:09.604812  792122 logs.go:282] 2 containers: [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d]
	I0210 11:17:09.604866  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.608845  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.613042  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:17:09.613164  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:17:09.658259  792122 cri.go:89] found id: "2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
	I0210 11:17:09.658335  792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
	I0210 11:17:09.658389  792122 cri.go:89] found id: ""
	I0210 11:17:09.658414  792122 logs.go:282] 2 containers: [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd]
	I0210 11:17:09.658491  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.662928  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.666904  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:17:09.667021  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:17:09.714379  792122 cri.go:89] found id: "2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
	I0210 11:17:09.714455  792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
	I0210 11:17:09.714475  792122 cri.go:89] found id: ""
	I0210 11:17:09.714502  792122 logs.go:282] 2 containers: [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1]
	I0210 11:17:09.714574  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.718758  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.722517  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:17:09.722636  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:17:09.771474  792122 cri.go:89] found id: "aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
	I0210 11:17:09.771545  792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
	I0210 11:17:09.771565  792122 cri.go:89] found id: ""
	I0210 11:17:09.771588  792122 logs.go:282] 2 containers: [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d]
	I0210 11:17:09.771661  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.775353  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.779153  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0210 11:17:09.779273  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:17:09.825744  792122 cri.go:89] found id: "63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
	I0210 11:17:09.825818  792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
	I0210 11:17:09.825838  792122 cri.go:89] found id: ""
	I0210 11:17:09.825861  792122 logs.go:282] 2 containers: [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4]
	I0210 11:17:09.825933  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.829905  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.833685  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0210 11:17:09.833803  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 11:17:09.880184  792122 cri.go:89] found id: "b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
	I0210 11:17:09.880260  792122 cri.go:89] found id: "221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
	I0210 11:17:09.880279  792122 cri.go:89] found id: ""
	I0210 11:17:09.880303  792122 logs.go:282] 2 containers: [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525]
	I0210 11:17:09.880385  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.884665  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.888489  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:17:09.888609  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:17:09.933140  792122 cri.go:89] found id: "6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
	I0210 11:17:09.933213  792122 cri.go:89] found id: ""
	I0210 11:17:09.933235  792122 logs.go:282] 1 containers: [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7]
	I0210 11:17:09.933325  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:09.937792  792122 logs.go:123] Gathering logs for dmesg ...
	I0210 11:17:09.937862  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:17:09.973568  792122 logs.go:123] Gathering logs for kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] ...
	I0210 11:17:09.973650  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	I0210 11:17:10.088454  792122 logs.go:123] Gathering logs for kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] ...
	I0210 11:17:10.088500  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
	I0210 11:17:10.153844  792122 logs.go:123] Gathering logs for kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] ...
	I0210 11:17:10.153874  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
	I0210 11:17:10.260745  792122 logs.go:123] Gathering logs for etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] ...
	I0210 11:17:10.260782  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
	I0210 11:17:10.321419  792122 logs.go:123] Gathering logs for etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] ...
	I0210 11:17:10.321451  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
	I0210 11:17:10.397177  792122 logs.go:123] Gathering logs for coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] ...
	I0210 11:17:10.397207  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
	I0210 11:17:10.462194  792122 logs.go:123] Gathering logs for kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] ...
	I0210 11:17:10.462224  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
	I0210 11:17:10.528776  792122 logs.go:123] Gathering logs for kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] ...
	I0210 11:17:10.528803  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
	I0210 11:17:10.574450  792122 logs.go:123] Gathering logs for kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] ...
	I0210 11:17:10.574521  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
	I0210 11:17:10.652275  792122 logs.go:123] Gathering logs for kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] ...
	I0210 11:17:10.652360  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
	I0210 11:17:10.716278  792122 logs.go:123] Gathering logs for storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] ...
	I0210 11:17:10.716454  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
	I0210 11:17:10.765251  792122 logs.go:123] Gathering logs for kubelet ...
	I0210 11:17:10.765318  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0210 11:17:10.826952  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.495697     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.827210  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496065     665 reflector.go:138] object-"kube-system"/"coredns-token-7cchl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-7cchl" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.827464  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496388     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r7rrz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r7rrz" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.827695  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496738     665 reflector.go:138] object-"default"/"default-token-q8wzb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-q8wzb" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.827929  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.500993     665 reflector.go:138] object-"kube-system"/"kindnet-token-h7brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h7brt" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.828154  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501261     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.828396  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501486     665 reflector.go:138] object-"kube-system"/"metrics-server-token-pddsx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-pddsx" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.828635  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501700     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-92pf5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-92pf5" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:10.835472  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:43 old-k8s-version-705847 kubelet[665]: E0210 11:11:43.988520     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.835682  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:44 old-k8s-version-705847 kubelet[665]: E0210 11:11:44.494625     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.839280  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:57 old-k8s-version-705847 kubelet[665]: E0210 11:11:57.176598     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.841560  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:06 old-k8s-version-705847 kubelet[665]: E0210 11:12:06.587650     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.841923  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:07 old-k8s-version-705847 kubelet[665]: E0210 11:12:07.588161     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.842131  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:12 old-k8s-version-705847 kubelet[665]: E0210 11:12:12.166247     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.842823  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:13 old-k8s-version-705847 kubelet[665]: E0210 11:12:13.359119     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.843281  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:15 old-k8s-version-705847 kubelet[665]: E0210 11:12:15.615500     665 pod_workers.go:191] Error syncing pod 9fb88c78-7e13-4c39-b861-6a75febd2f29 ("storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"
	W0210 11:17:10.844228  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:24 old-k8s-version-705847 kubelet[665]: E0210 11:12:24.650563     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.846762  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:26 old-k8s-version-705847 kubelet[665]: E0210 11:12:26.179066     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.847253  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:33 old-k8s-version-705847 kubelet[665]: E0210 11:12:33.359712     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.847462  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:38 old-k8s-version-705847 kubelet[665]: E0210 11:12:38.166028     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.847813  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:44 old-k8s-version-705847 kubelet[665]: E0210 11:12:44.165493     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.848022  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:52 old-k8s-version-705847 kubelet[665]: E0210 11:12:52.166523     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.848632  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:58 old-k8s-version-705847 kubelet[665]: E0210 11:12:58.763561     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.848983  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:03 old-k8s-version-705847 kubelet[665]: E0210 11:13:03.358918     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.849189  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:06 old-k8s-version-705847 kubelet[665]: E0210 11:13:06.166020     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.849553  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:15 old-k8s-version-705847 kubelet[665]: E0210 11:13:15.165381     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.852128  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:20 old-k8s-version-705847 kubelet[665]: E0210 11:13:20.182857     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.852482  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:27 old-k8s-version-705847 kubelet[665]: E0210 11:13:27.165926     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.852710  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:34 old-k8s-version-705847 kubelet[665]: E0210 11:13:34.166696     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.853073  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:38 old-k8s-version-705847 kubelet[665]: E0210 11:13:38.165396     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.853287  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:46 old-k8s-version-705847 kubelet[665]: E0210 11:13:46.167918     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.853938  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:51 old-k8s-version-705847 kubelet[665]: E0210 11:13:51.896354     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.854291  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:53 old-k8s-version-705847 kubelet[665]: E0210 11:13:53.359574     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.854509  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:00 old-k8s-version-705847 kubelet[665]: E0210 11:14:00.171443     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.854876  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:05 old-k8s-version-705847 kubelet[665]: E0210 11:14:05.165923     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	I0210 11:17:08.051215  802973 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0210 11:17:08.051522  802973 start.go:159] libmachine.API.Create for "embed-certs-822142" (driver="docker")
	I0210 11:17:08.051569  802973 client.go:168] LocalClient.Create starting
	I0210 11:17:08.051638  802973 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem
	I0210 11:17:08.051680  802973 main.go:141] libmachine: Decoding PEM data...
	I0210 11:17:08.051698  802973 main.go:141] libmachine: Parsing certificate...
	I0210 11:17:08.051761  802973 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem
	I0210 11:17:08.051782  802973 main.go:141] libmachine: Decoding PEM data...
	I0210 11:17:08.051801  802973 main.go:141] libmachine: Parsing certificate...
	I0210 11:17:08.052246  802973 cli_runner.go:164] Run: docker network inspect embed-certs-822142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0210 11:17:08.070709  802973 cli_runner.go:211] docker network inspect embed-certs-822142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0210 11:17:08.070809  802973 network_create.go:284] running [docker network inspect embed-certs-822142] to gather additional debugging logs...
	I0210 11:17:08.070864  802973 cli_runner.go:164] Run: docker network inspect embed-certs-822142
	W0210 11:17:08.090054  802973 cli_runner.go:211] docker network inspect embed-certs-822142 returned with exit code 1
	I0210 11:17:08.090104  802973 network_create.go:287] error running [docker network inspect embed-certs-822142]: docker network inspect embed-certs-822142: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-822142 not found
	I0210 11:17:08.090123  802973 network_create.go:289] output of [docker network inspect embed-certs-822142]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-822142 not found
	
	** /stderr **
	I0210 11:17:08.090233  802973 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0210 11:17:08.108019  802973 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37f7c82b9b3f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:2a:78:ce:04} reservation:<nil>}
	I0210 11:17:08.108521  802973 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1f232eef2a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:47:fc:c8:24} reservation:<nil>}
	I0210 11:17:08.109080  802973 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e1b5d2238101 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d6:27:32:a1} reservation:<nil>}
	I0210 11:17:08.109593  802973 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fc44ac08ef1f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:8a:ec:dc:83} reservation:<nil>}
	I0210 11:17:08.110202  802973 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a22750}
	I0210 11:17:08.110232  802973 network_create.go:124] attempt to create docker network embed-certs-822142 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0210 11:17:08.110303  802973 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-822142 embed-certs-822142
	I0210 11:17:08.200418  802973 network_create.go:108] docker network embed-certs-822142 192.168.85.0/24 created
	I0210 11:17:08.200451  802973 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-822142" container
	I0210 11:17:08.200524  802973 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0210 11:17:08.217784  802973 cli_runner.go:164] Run: docker volume create embed-certs-822142 --label name.minikube.sigs.k8s.io=embed-certs-822142 --label created_by.minikube.sigs.k8s.io=true
	I0210 11:17:08.237235  802973 oci.go:103] Successfully created a docker volume embed-certs-822142
	I0210 11:17:08.237365  802973 cli_runner.go:164] Run: docker run --rm --name embed-certs-822142-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-822142 --entrypoint /usr/bin/test -v embed-certs-822142:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0210 11:17:08.913221  802973 oci.go:107] Successfully prepared a docker volume embed-certs-822142
	I0210 11:17:08.913273  802973 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 11:17:08.913294  802973 kic.go:194] Starting extracting preloaded images to volume ...
	I0210 11:17:08.913371  802973 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-822142:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	W0210 11:17:10.855096  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:12 old-k8s-version-705847 kubelet[665]: E0210 11:14:12.166921     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.855567  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:17 old-k8s-version-705847 kubelet[665]: E0210 11:14:17.165864     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.855778  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:26 old-k8s-version-705847 kubelet[665]: E0210 11:14:26.165733     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.856158  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:31 old-k8s-version-705847 kubelet[665]: E0210 11:14:31.165921     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.856381  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:40 old-k8s-version-705847 kubelet[665]: E0210 11:14:40.166598     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.856738  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:46 old-k8s-version-705847 kubelet[665]: E0210 11:14:46.165929     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.859236  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:55 old-k8s-version-705847 kubelet[665]: E0210 11:14:55.174499     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:10.859591  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:58 old-k8s-version-705847 kubelet[665]: E0210 11:14:58.165404     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.859943  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:09 old-k8s-version-705847 kubelet[665]: E0210 11:15:09.165442     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.864467  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:10 old-k8s-version-705847 kubelet[665]: E0210 11:15:10.167175     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.865098  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:21 old-k8s-version-705847 kubelet[665]: E0210 11:15:21.169488     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.865311  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:22 old-k8s-version-705847 kubelet[665]: E0210 11:15:22.173180     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.865688  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:23 old-k8s-version-705847 kubelet[665]: E0210 11:15:23.359507     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.865901  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:35 old-k8s-version-705847 kubelet[665]: E0210 11:15:35.165986     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.866255  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:36 old-k8s-version-705847 kubelet[665]: E0210 11:15:36.165557     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.866470  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.866821  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.867030  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.867413  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.867627  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.868007  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.868230  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.868594  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.868802  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.869151  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.869370  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.869737  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:10.869948  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:10.870300  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	I0210 11:17:10.870323  792122 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:17:10.870349  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 11:17:11.085761  792122 logs.go:123] Gathering logs for coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] ...
	I0210 11:17:11.085801  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
	I0210 11:17:11.150903  792122 logs.go:123] Gathering logs for kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] ...
	I0210 11:17:11.150935  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
	I0210 11:17:11.209154  792122 logs.go:123] Gathering logs for storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] ...
	I0210 11:17:11.209226  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
	I0210 11:17:11.266387  792122 logs.go:123] Gathering logs for container status ...
	I0210 11:17:11.266414  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:17:11.330870  792122 logs.go:123] Gathering logs for kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] ...
	I0210 11:17:11.330958  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
	I0210 11:17:11.456996  792122 logs.go:123] Gathering logs for kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] ...
	I0210 11:17:11.457085  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
	I0210 11:17:11.505129  792122 logs.go:123] Gathering logs for kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] ...
	I0210 11:17:11.505201  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
	I0210 11:17:11.562557  792122 logs.go:123] Gathering logs for containerd ...
	I0210 11:17:11.562640  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0210 11:17:11.629917  792122 out.go:358] Setting ErrFile to fd 2...
	I0210 11:17:11.629991  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0210 11:17:11.630090  792122 out.go:270] X Problems detected in kubelet:
	W0210 11:17:11.630134  792122 out.go:270]   Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:11.630302  792122 out.go:270]   Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:11.630335  792122 out.go:270]   Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:11.630379  792122 out.go:270]   Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:11.630431  792122 out.go:270]   Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	I0210 11:17:11.630463  792122 out.go:358] Setting ErrFile to fd 2...
	I0210 11:17:11.630494  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:17:14.537459  802973 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-822142:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (5.624049494s)
	I0210 11:17:14.537572  802973 kic.go:203] duration metric: took 5.624205943s to extract preloaded images to volume ...
	W0210 11:17:14.537710  802973 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0210 11:17:14.537830  802973 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0210 11:17:14.587765  802973 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-822142 --name embed-certs-822142 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-822142 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-822142 --network embed-certs-822142 --ip 192.168.85.2 --volume embed-certs-822142:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0210 11:17:14.948338  802973 cli_runner.go:164] Run: docker container inspect embed-certs-822142 --format={{.State.Running}}
	I0210 11:17:14.969176  802973 cli_runner.go:164] Run: docker container inspect embed-certs-822142 --format={{.State.Status}}
	I0210 11:17:14.989868  802973 cli_runner.go:164] Run: docker exec embed-certs-822142 stat /var/lib/dpkg/alternatives/iptables
	I0210 11:17:15.056747  802973 oci.go:144] the created container "embed-certs-822142" has a running status.
	I0210 11:17:15.056788  802973 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa...
	I0210 11:17:15.243142  802973 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0210 11:17:15.273892  802973 cli_runner.go:164] Run: docker container inspect embed-certs-822142 --format={{.State.Status}}
	I0210 11:17:15.297399  802973 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0210 11:17:15.297422  802973 kic_runner.go:114] Args: [docker exec --privileged embed-certs-822142 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0210 11:17:15.366262  802973 cli_runner.go:164] Run: docker container inspect embed-certs-822142 --format={{.State.Status}}
	I0210 11:17:15.391371  802973 machine.go:93] provisionDockerMachine start ...
	I0210 11:17:15.391461  802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
	I0210 11:17:15.421342  802973 main.go:141] libmachine: Using SSH client type: native
	I0210 11:17:15.421665  802973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil>  [] 0s} 127.0.0.1 33808 <nil> <nil>}
	I0210 11:17:15.421686  802973 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:17:15.429700  802973 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0210 11:17:18.556944  802973 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-822142
	
	I0210 11:17:18.556969  802973 ubuntu.go:169] provisioning hostname "embed-certs-822142"
	I0210 11:17:18.557049  802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
	I0210 11:17:18.574796  802973 main.go:141] libmachine: Using SSH client type: native
	I0210 11:17:18.576887  802973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil>  [] 0s} 127.0.0.1 33808 <nil> <nil>}
	I0210 11:17:18.576918  802973 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-822142 && echo "embed-certs-822142" | sudo tee /etc/hostname
	I0210 11:17:18.714590  802973 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-822142
	
	I0210 11:17:18.714668  802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
	I0210 11:17:18.733236  802973 main.go:141] libmachine: Using SSH client type: native
	I0210 11:17:18.733551  802973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil>  [] 0s} 127.0.0.1 33808 <nil> <nil>}
	I0210 11:17:18.733569  802973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-822142' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-822142/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-822142' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:17:18.857672  802973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:17:18.857766  802973 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20385-576242/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-576242/.minikube}
	I0210 11:17:18.857801  802973 ubuntu.go:177] setting up certificates
	I0210 11:17:18.857835  802973 provision.go:84] configureAuth start
	I0210 11:17:18.857920  802973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-822142
	I0210 11:17:18.874240  802973 provision.go:143] copyHostCerts
	I0210 11:17:18.874306  802973 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem, removing ...
	I0210 11:17:18.874319  802973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem
	I0210 11:17:18.874395  802973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem (1078 bytes)
	I0210 11:17:18.874498  802973 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem, removing ...
	I0210 11:17:18.874510  802973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem
	I0210 11:17:18.874545  802973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem (1123 bytes)
	I0210 11:17:18.874613  802973 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem, removing ...
	I0210 11:17:18.874626  802973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem
	I0210 11:17:18.874652  802973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem (1679 bytes)
	I0210 11:17:18.874714  802973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem org=jenkins.embed-certs-822142 san=[127.0.0.1 192.168.85.2 embed-certs-822142 localhost minikube]
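
For context, the server certificate above is generated in-process by minikube's Go code (crypto.go); a roughly equivalent certificate carrying the same SANs could be produced manually with openssl along these lines (a sketch only; file names are illustrative, the org and SAN values are taken from the log line above):

    # Create a key and CSR, then sign with the cluster CA, embedding the SANs
    # listed above. Requires bash for the <(...) process substitution.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.embed-certs-822142"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:embed-certs-822142,DNS:localhost,DNS:minikube")
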
	I0210 11:17:20.464640  802973 provision.go:177] copyRemoteCerts
	I0210 11:17:20.464758  802973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:17:20.464863  802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
	I0210 11:17:20.483058  802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
	I0210 11:17:20.574272  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0210 11:17:20.599120  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0210 11:17:20.624908  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 11:17:20.650939  802973 provision.go:87] duration metric: took 1.793072901s to configureAuth
	I0210 11:17:20.651006  802973 ubuntu.go:193] setting minikube options for container-runtime
	I0210 11:17:20.651198  802973 config.go:182] Loaded profile config "embed-certs-822142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 11:17:20.651214  802973 machine.go:96] duration metric: took 5.259824852s to provisionDockerMachine
	I0210 11:17:20.651222  802973 client.go:171] duration metric: took 12.599644812s to LocalClient.Create
	I0210 11:17:20.651237  802973 start.go:167] duration metric: took 12.59971655s to libmachine.API.Create "embed-certs-822142"
	I0210 11:17:20.651244  802973 start.go:293] postStartSetup for "embed-certs-822142" (driver="docker")
	I0210 11:17:20.651253  802973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:17:20.651310  802973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:17:20.651366  802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
	I0210 11:17:20.668300  802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
	I0210 11:17:20.759359  802973 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:17:20.762884  802973 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0210 11:17:20.762964  802973 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0210 11:17:20.762980  802973 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0210 11:17:20.762989  802973 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0210 11:17:20.763003  802973 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-576242/.minikube/addons for local assets ...
	I0210 11:17:20.763074  802973 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-576242/.minikube/files for local assets ...
	I0210 11:17:20.763160  802973 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem -> 5816292.pem in /etc/ssl/certs
	I0210 11:17:20.763274  802973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:17:20.772370  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem --> /etc/ssl/certs/5816292.pem (1708 bytes)
	I0210 11:17:20.805302  802973 start.go:296] duration metric: took 154.043074ms for postStartSetup
	I0210 11:17:20.805727  802973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-822142
	I0210 11:17:20.822895  802973 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/config.json ...
	I0210 11:17:20.823186  802973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 11:17:20.823242  802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
	I0210 11:17:20.842298  802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
	I0210 11:17:20.935187  802973 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0210 11:17:20.942650  802973 start.go:128] duration metric: took 12.894947282s to createHost
	I0210 11:17:20.942677  802973 start.go:83] releasing machines lock for "embed-certs-822142", held for 12.895083047s
	I0210 11:17:20.942752  802973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-822142
	I0210 11:17:20.960968  802973 ssh_runner.go:195] Run: cat /version.json
	I0210 11:17:20.961035  802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
	I0210 11:17:20.961285  802973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 11:17:20.961341  802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
	I0210 11:17:20.983437  802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
	I0210 11:17:21.001874  802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
	I0210 11:17:21.207532  802973 ssh_runner.go:195] Run: systemctl --version
	I0210 11:17:21.212000  802973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 11:17:21.216353  802973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0210 11:17:21.241271  802973 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0210 11:17:21.241363  802973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:17:21.273628  802973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
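
The find/sed pass above patches any existing loopback CNI config under /etc/cni/net.d to add a "name" field and pin "cniVersion" to 1.0.0, then moves the bridge/podman configs aside. A patched loopback config would end up looking roughly like this (the file name and exact contents are illustrative assumptions, not taken from this run):

    # Sketch of a loopback CNI config after the patching step above.
    cat <<'EOF' | sudo tee /etc/cni/net.d/200-loopback.conf
    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }
    EOF
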
	I0210 11:17:21.273651  802973 start.go:495] detecting cgroup driver to use...
	I0210 11:17:21.273686  802973 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0210 11:17:21.273756  802973 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 11:17:21.288378  802973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:17:21.300136  802973 docker.go:217] disabling cri-docker service (if available) ...
	I0210 11:17:21.300201  802973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 11:17:21.314350  802973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 11:17:21.329171  802973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 11:17:21.424717  802973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 11:17:21.520459  802973 docker.go:233] disabling docker service ...
	I0210 11:17:21.520570  802973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 11:17:21.542235  802973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 11:17:21.554458  802973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 11:17:21.663365  802973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 11:17:21.787765  802973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 11:17:21.800304  802973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:17:21.817081  802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0210 11:17:21.826976  802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 11:17:21.838768  802973 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 11:17:21.838871  802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 11:17:21.852386  802973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:17:21.864843  802973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 11:17:21.875736  802973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:17:21.886867  802973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:17:21.898979  802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 11:17:21.911124  802973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 11:17:21.922538  802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 11:17:21.932877  802973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:17:21.949614  802973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:17:21.963841  802973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:17:22.090852  802973 ssh_runner.go:195] Run: sudo systemctl restart containerd
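
The sed edits above rewrite /etc/containerd/config.toml in place (pause image, cgroupfs cgroup driver, runc v2 runtime, CNI conf_dir, unprivileged ports) before containerd is restarted. One way to confirm the result on the node would be (a verification sketch, not part of the test run):

    # Show the settings the preceding sed commands are expected to leave behind.
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # Expected, roughly:
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
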
	I0210 11:17:22.290969  802973 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0210 11:17:22.291094  802973 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0210 11:17:22.295095  802973 start.go:563] Will wait 60s for crictl version
	I0210 11:17:22.295206  802973 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.298662  802973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:17:22.365585  802973 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0210 11:17:22.365696  802973 ssh_runner.go:195] Run: containerd --version
	I0210 11:17:22.395971  802973 ssh_runner.go:195] Run: containerd --version
	I0210 11:17:22.428516  802973 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.24 ...
	I0210 11:17:22.431522  802973 cli_runner.go:164] Run: docker network inspect embed-certs-822142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0210 11:17:22.453089  802973 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0210 11:17:22.457029  802973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:17:22.468908  802973 kubeadm.go:883] updating cluster {Name:embed-certs-822142 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:17:22.469045  802973 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 11:17:22.469115  802973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:17:22.524876  802973 containerd.go:627] all images are preloaded for containerd runtime.
	I0210 11:17:22.524910  802973 containerd.go:534] Images already preloaded, skipping extraction
	I0210 11:17:22.524971  802973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 11:17:22.596641  802973 containerd.go:627] all images are preloaded for containerd runtime.
	I0210 11:17:22.596661  802973 cache_images.go:84] Images are preloaded, skipping loading
	I0210 11:17:22.596669  802973 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.1 containerd true true} ...
	I0210 11:17:22.596774  802973 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-822142 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:17:22.596836  802973 ssh_runner.go:195] Run: sudo crictl info
	I0210 11:17:22.651446  802973 cni.go:84] Creating CNI manager for ""
	I0210 11:17:22.651522  802973 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 11:17:22.651546  802973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 11:17:22.651603  802973 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-822142 NodeName:embed-certs-822142 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 11:17:22.651773  802973 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-822142"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
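
The kubeadm configuration above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is later copied to /var/tmp/minikube/kubeadm.yaml on the node. It could be exercised without mutating the node by running kubeadm in dry-run mode, along these lines (a sketch; not something this test run does):

    sudo /var/lib/minikube/binaries/v1.32.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run
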
	
	I0210 11:17:22.651875  802973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:17:22.662804  802973 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:17:22.662879  802973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:17:22.678027  802973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0210 11:17:22.703803  802973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:17:22.726960  802973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0210 11:17:22.750290  802973 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0210 11:17:22.754309  802973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:17:22.766121  802973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:17:22.885745  802973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:17:22.906878  802973 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142 for IP: 192.168.85.2
	I0210 11:17:22.906905  802973 certs.go:194] generating shared ca certs ...
	I0210 11:17:22.906921  802973 certs.go:226] acquiring lock for ca certs: {Name:mk41210dcb5a25827819de2f65fc930debb2adb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:17:22.907058  802973 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-576242/.minikube/ca.key
	I0210 11:17:22.907099  802973 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.key
	I0210 11:17:22.907106  802973 certs.go:256] generating profile certs ...
	I0210 11:17:22.907160  802973 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.key
	I0210 11:17:22.907172  802973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.crt with IP's: []
	I0210 11:17:23.307894  802973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.crt ...
	I0210 11:17:23.307920  802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.crt: {Name:mk8870791c3c3973168792207acd9eb0b2a40a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:17:23.308866  802973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.key ...
	I0210 11:17:23.308914  802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.key: {Name:mk604a037a81a7ea58f5afe10f1a089ed594d3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:17:23.309082  802973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key.7c314cda
	I0210 11:17:23.309123  802973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt.7c314cda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0210 11:17:24.339512  802973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt.7c314cda ...
	I0210 11:17:24.339543  802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt.7c314cda: {Name:mkd5e2b191a7961291802da6ed354ef008572159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:17:24.340319  802973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key.7c314cda ...
	I0210 11:17:24.340340  802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key.7c314cda: {Name:mk0c36bcdd91c971ea900fe7bc5d35c59eb31924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:17:24.340441  802973 certs.go:381] copying /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt.7c314cda -> /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt
	I0210 11:17:24.340530  802973 certs.go:385] copying /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key.7c314cda -> /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key
	I0210 11:17:24.340594  802973 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.key
	I0210 11:17:24.340613  802973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.crt with IP's: []
	I0210 11:17:24.743254  802973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.crt ...
	I0210 11:17:24.743290  802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.crt: {Name:mkb1c131a1fbe9c90e39e56039ffe4956412086c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:17:24.744118  802973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.key ...
	I0210 11:17:24.744136  802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.key: {Name:mk0e22431673b3045ab42d984b949c91818da058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:17:24.744339  802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629.pem (1338 bytes)
	W0210 11:17:24.744388  802973 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629_empty.pem, impossibly tiny 0 bytes
	I0210 11:17:24.744403  802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 11:17:24.744430  802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem (1078 bytes)
	I0210 11:17:24.744458  802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem (1123 bytes)
	I0210 11:17:24.744485  802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem (1679 bytes)
	I0210 11:17:24.744531  802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem (1708 bytes)
	I0210 11:17:24.745141  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:17:24.770589  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 11:17:24.795510  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:17:24.820316  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 11:17:24.845805  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0210 11:17:24.871079  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:17:24.899257  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:17:24.924247  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:17:24.958799  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629.pem --> /usr/share/ca-certificates/581629.pem (1338 bytes)
	I0210 11:17:24.984660  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem --> /usr/share/ca-certificates/5816292.pem (1708 bytes)
	I0210 11:17:25.016620  802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:17:25.044496  802973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 11:17:25.063390  802973 ssh_runner.go:195] Run: openssl version
	I0210 11:17:25.069533  802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:17:25.080132  802973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:17:25.084150  802973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:17:25.084253  802973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:17:25.092625  802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:17:25.104621  802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/581629.pem && ln -fs /usr/share/ca-certificates/581629.pem /etc/ssl/certs/581629.pem"
	I0210 11:17:25.114512  802973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/581629.pem
	I0210 11:17:25.118254  802973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:32 /usr/share/ca-certificates/581629.pem
	I0210 11:17:25.118353  802973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/581629.pem
	I0210 11:17:25.125148  802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/581629.pem /etc/ssl/certs/51391683.0"
	I0210 11:17:25.136002  802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5816292.pem && ln -fs /usr/share/ca-certificates/5816292.pem /etc/ssl/certs/5816292.pem"
	I0210 11:17:25.147305  802973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5816292.pem
	I0210 11:17:25.151156  802973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:32 /usr/share/ca-certificates/5816292.pem
	I0210 11:17:25.151256  802973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5816292.pem
	I0210 11:17:25.158821  802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5816292.pem /etc/ssl/certs/3ec20f2e.0"
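
The ls/openssl/ln sequence above installs each certificate into the system trust store using OpenSSL's subject-hash naming convention: the cert is exposed under /usr/share/ca-certificates and a <hash>.0 symlink is created in /etc/ssl/certs. The equivalent manual steps for the minikube CA look roughly like this (a sketch; paths match this run):

    # Compute the OpenSSL subject hash and create the <hash>.0 trust-store symlink.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
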
	I0210 11:17:25.169927  802973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:17:25.173496  802973 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:17:25.173644  802973 kubeadm.go:392] StartCluster: {Name:embed-certs-822142 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:17:25.173767  802973 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0210 11:17:25.173836  802973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 11:17:25.213573  802973 cri.go:89] found id: ""
	I0210 11:17:25.213650  802973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:17:25.222988  802973 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:17:25.232053  802973 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0210 11:17:25.232182  802973 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:17:25.242258  802973 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:17:25.242281  802973 kubeadm.go:157] found existing configuration files:
	
	I0210 11:17:25.242353  802973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:17:25.251250  802973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:17:25.251367  802973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:17:25.260016  802973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:17:25.269339  802973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:17:25.269406  802973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:17:25.279339  802973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:17:25.291878  802973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:17:25.291990  802973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:17:25.301917  802973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:17:25.312112  802973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:17:25.312225  802973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:17:25.333270  802973 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0210 11:17:25.408443  802973 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 11:17:25.408568  802973 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:17:25.437214  802973 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0210 11:17:25.437350  802973 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0210 11:17:25.437425  802973 kubeadm.go:310] OS: Linux
	I0210 11:17:25.437534  802973 kubeadm.go:310] CGROUPS_CPU: enabled
	I0210 11:17:25.437615  802973 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0210 11:17:25.437701  802973 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0210 11:17:25.437786  802973 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0210 11:17:25.437871  802973 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0210 11:17:25.437953  802973 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0210 11:17:25.438027  802973 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0210 11:17:25.438105  802973 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0210 11:17:25.438187  802973 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0210 11:17:25.509763  802973 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:17:25.509916  802973 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:17:25.510030  802973 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 11:17:25.518002  802973 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:17:21.633459  792122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:17:21.646627  792122 api_server.go:72] duration metric: took 5m57.197162359s to wait for apiserver process to appear ...
	I0210 11:17:21.646652  792122 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:17:21.646689  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0210 11:17:21.646747  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0210 11:17:21.702943  792122 cri.go:89] found id: "ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
	I0210 11:17:21.702968  792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	I0210 11:17:21.702974  792122 cri.go:89] found id: ""
	I0210 11:17:21.702981  792122 logs.go:282] 2 containers: [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01]
	I0210 11:17:21.703043  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.706808  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.711614  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0210 11:17:21.711686  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0210 11:17:21.769142  792122 cri.go:89] found id: "4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
	I0210 11:17:21.769166  792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
	I0210 11:17:21.769171  792122 cri.go:89] found id: ""
	I0210 11:17:21.769178  792122 logs.go:282] 2 containers: [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba]
	I0210 11:17:21.769231  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.772814  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.776371  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0210 11:17:21.776467  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0210 11:17:21.835068  792122 cri.go:89] found id: "23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
	I0210 11:17:21.835099  792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
	I0210 11:17:21.835105  792122 cri.go:89] found id: ""
	I0210 11:17:21.835112  792122 logs.go:282] 2 containers: [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d]
	I0210 11:17:21.835205  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.839601  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.843809  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0210 11:17:21.843906  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0210 11:17:21.894020  792122 cri.go:89] found id: "2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
	I0210 11:17:21.894042  792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
	I0210 11:17:21.894047  792122 cri.go:89] found id: ""
	I0210 11:17:21.894054  792122 logs.go:282] 2 containers: [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd]
	I0210 11:17:21.894151  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.898071  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.902515  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0210 11:17:21.902616  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0210 11:17:21.980105  792122 cri.go:89] found id: "2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
	I0210 11:17:21.980138  792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
	I0210 11:17:21.980144  792122 cri.go:89] found id: ""
	I0210 11:17:21.980151  792122 logs.go:282] 2 containers: [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1]
	I0210 11:17:21.980235  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.984322  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:21.987666  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0210 11:17:21.987780  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0210 11:17:22.059620  792122 cri.go:89] found id: "aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
	I0210 11:17:22.059644  792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
	I0210 11:17:22.059649  792122 cri.go:89] found id: ""
	I0210 11:17:22.059658  792122 logs.go:282] 2 containers: [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d]
	I0210 11:17:22.059744  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.063872  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.067934  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0210 11:17:22.068028  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0210 11:17:22.120294  792122 cri.go:89] found id: "63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
	I0210 11:17:22.120314  792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
	I0210 11:17:22.120319  792122 cri.go:89] found id: ""
	I0210 11:17:22.120326  792122 logs.go:282] 2 containers: [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4]
	I0210 11:17:22.120379  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.124012  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.133616  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0210 11:17:22.133685  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0210 11:17:22.193873  792122 cri.go:89] found id: "b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
	I0210 11:17:22.193892  792122 cri.go:89] found id: "221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
	I0210 11:17:22.193897  792122 cri.go:89] found id: ""
	I0210 11:17:22.193904  792122 logs.go:282] 2 containers: [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525]
	I0210 11:17:22.193959  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.197703  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.201260  792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0210 11:17:22.201380  792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0210 11:17:22.252446  792122 cri.go:89] found id: "6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
	I0210 11:17:22.252510  792122 cri.go:89] found id: ""
	I0210 11:17:22.252533  792122 logs.go:282] 1 containers: [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7]
	I0210 11:17:22.252606  792122 ssh_runner.go:195] Run: which crictl
	I0210 11:17:22.256456  792122 logs.go:123] Gathering logs for kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] ...
	I0210 11:17:22.256522  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
	I0210 11:17:22.319127  792122 logs.go:123] Gathering logs for kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] ...
	I0210 11:17:22.319197  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
	I0210 11:17:22.371929  792122 logs.go:123] Gathering logs for kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] ...
	I0210 11:17:22.371996  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
	I0210 11:17:22.419946  792122 logs.go:123] Gathering logs for storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] ...
	I0210 11:17:22.420016  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
	I0210 11:17:22.500193  792122 logs.go:123] Gathering logs for describe nodes ...
	I0210 11:17:22.500219  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 11:17:22.689017  792122 logs.go:123] Gathering logs for coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] ...
	I0210 11:17:22.689049  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
	I0210 11:17:22.771016  792122 logs.go:123] Gathering logs for kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] ...
	I0210 11:17:22.771047  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
	I0210 11:17:22.833433  792122 logs.go:123] Gathering logs for kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] ...
	I0210 11:17:22.833464  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
	I0210 11:17:22.899680  792122 logs.go:123] Gathering logs for containerd ...
	I0210 11:17:22.899757  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0210 11:17:22.995820  792122 logs.go:123] Gathering logs for dmesg ...
	I0210 11:17:22.995915  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 11:17:23.022911  792122 logs.go:123] Gathering logs for etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] ...
	I0210 11:17:23.022939  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
	I0210 11:17:23.088083  792122 logs.go:123] Gathering logs for coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] ...
	I0210 11:17:23.088257  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
	I0210 11:17:23.173762  792122 logs.go:123] Gathering logs for container status ...
	I0210 11:17:23.173836  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 11:17:23.230605  792122 logs.go:123] Gathering logs for kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] ...
	I0210 11:17:23.230682  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
	I0210 11:17:23.306650  792122 logs.go:123] Gathering logs for kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] ...
	I0210 11:17:23.306724  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	I0210 11:17:23.388460  792122 logs.go:123] Gathering logs for kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] ...
	I0210 11:17:23.388501  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
	I0210 11:17:23.442850  792122 logs.go:123] Gathering logs for kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] ...
	I0210 11:17:23.442879  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
	I0210 11:17:23.569314  792122 logs.go:123] Gathering logs for kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] ...
	I0210 11:17:23.569354  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
	I0210 11:17:23.623310  792122 logs.go:123] Gathering logs for storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] ...
	I0210 11:17:23.623338  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
	I0210 11:17:23.669314  792122 logs.go:123] Gathering logs for kubelet ...
	I0210 11:17:23.669343  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0210 11:17:23.735791  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.495697     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.736086  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496065     665 reflector.go:138] object-"kube-system"/"coredns-token-7cchl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-7cchl" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.736428  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496388     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r7rrz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r7rrz" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.736690  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496738     665 reflector.go:138] object-"default"/"default-token-q8wzb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-q8wzb" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.736907  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.500993     665 reflector.go:138] object-"kube-system"/"kindnet-token-h7brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h7brt" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.737110  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501261     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.737331  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501486     665 reflector.go:138] object-"kube-system"/"metrics-server-token-pddsx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-pddsx" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.737557  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501700     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-92pf5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-92pf5" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
	W0210 11:17:23.744448  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:43 old-k8s-version-705847 kubelet[665]: E0210 11:11:43.988520     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.744641  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:44 old-k8s-version-705847 kubelet[665]: E0210 11:11:44.494625     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.748240  792122 logs.go:138] Found kubelet problem: Feb 10 11:11:57 old-k8s-version-705847 kubelet[665]: E0210 11:11:57.176598     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.750410  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:06 old-k8s-version-705847 kubelet[665]: E0210 11:12:06.587650     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.750747  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:07 old-k8s-version-705847 kubelet[665]: E0210 11:12:07.588161     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.750932  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:12 old-k8s-version-705847 kubelet[665]: E0210 11:12:12.166247     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.751597  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:13 old-k8s-version-705847 kubelet[665]: E0210 11:12:13.359119     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.752034  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:15 old-k8s-version-705847 kubelet[665]: E0210 11:12:15.615500     665 pod_workers.go:191] Error syncing pod 9fb88c78-7e13-4c39-b861-6a75febd2f29 ("storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"
	W0210 11:17:23.752959  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:24 old-k8s-version-705847 kubelet[665]: E0210 11:12:24.650563     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.755482  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:26 old-k8s-version-705847 kubelet[665]: E0210 11:12:26.179066     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.755947  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:33 old-k8s-version-705847 kubelet[665]: E0210 11:12:33.359712     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.756133  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:38 old-k8s-version-705847 kubelet[665]: E0210 11:12:38.166028     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.756462  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:44 old-k8s-version-705847 kubelet[665]: E0210 11:12:44.165493     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.756668  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:52 old-k8s-version-705847 kubelet[665]: E0210 11:12:52.166523     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.757257  792122 logs.go:138] Found kubelet problem: Feb 10 11:12:58 old-k8s-version-705847 kubelet[665]: E0210 11:12:58.763561     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.757662  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:03 old-k8s-version-705847 kubelet[665]: E0210 11:13:03.358918     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.757866  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:06 old-k8s-version-705847 kubelet[665]: E0210 11:13:06.166020     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.758208  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:15 old-k8s-version-705847 kubelet[665]: E0210 11:13:15.165381     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.760718  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:20 old-k8s-version-705847 kubelet[665]: E0210 11:13:20.182857     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.761078  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:27 old-k8s-version-705847 kubelet[665]: E0210 11:13:27.165926     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.761277  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:34 old-k8s-version-705847 kubelet[665]: E0210 11:13:34.166696     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.761645  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:38 old-k8s-version-705847 kubelet[665]: E0210 11:13:38.165396     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.761831  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:46 old-k8s-version-705847 kubelet[665]: E0210 11:13:46.167918     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.762418  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:51 old-k8s-version-705847 kubelet[665]: E0210 11:13:51.896354     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.762750  792122 logs.go:138] Found kubelet problem: Feb 10 11:13:53 old-k8s-version-705847 kubelet[665]: E0210 11:13:53.359574     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.762936  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:00 old-k8s-version-705847 kubelet[665]: E0210 11:14:00.171443     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.763264  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:05 old-k8s-version-705847 kubelet[665]: E0210 11:14:05.165923     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.763453  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:12 old-k8s-version-705847 kubelet[665]: E0210 11:14:12.166921     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.763802  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:17 old-k8s-version-705847 kubelet[665]: E0210 11:14:17.165864     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.763989  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:26 old-k8s-version-705847 kubelet[665]: E0210 11:14:26.165733     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.764385  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:31 old-k8s-version-705847 kubelet[665]: E0210 11:14:31.165921     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.764574  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:40 old-k8s-version-705847 kubelet[665]: E0210 11:14:40.166598     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.764916  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:46 old-k8s-version-705847 kubelet[665]: E0210 11:14:46.165929     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.767429  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:55 old-k8s-version-705847 kubelet[665]: E0210 11:14:55.174499     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0210 11:17:23.767825  792122 logs.go:138] Found kubelet problem: Feb 10 11:14:58 old-k8s-version-705847 kubelet[665]: E0210 11:14:58.165404     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.768160  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:09 old-k8s-version-705847 kubelet[665]: E0210 11:15:09.165442     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.768346  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:10 old-k8s-version-705847 kubelet[665]: E0210 11:15:10.167175     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.768960  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:21 old-k8s-version-705847 kubelet[665]: E0210 11:15:21.169488     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.769151  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:22 old-k8s-version-705847 kubelet[665]: E0210 11:15:22.173180     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.769564  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:23 old-k8s-version-705847 kubelet[665]: E0210 11:15:23.359507     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.769768  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:35 old-k8s-version-705847 kubelet[665]: E0210 11:15:35.165986     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.770114  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:36 old-k8s-version-705847 kubelet[665]: E0210 11:15:36.165557     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.770306  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.770642  792122 logs.go:138] Found kubelet problem: Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.770826  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.771154  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.771340  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.771666  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.771864  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.772192  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.772377  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.772703  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.772887  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.773213  792122 logs.go:138] Found kubelet problem: Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.773398  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.773731  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.774137  792122 logs.go:138] Found kubelet problem: Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0210 11:17:23.774168  792122 logs.go:123] Gathering logs for etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] ...
	I0210 11:17:23.774184  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
	I0210 11:17:23.845922  792122 logs.go:123] Gathering logs for kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] ...
	I0210 11:17:23.845950  792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
	I0210 11:17:23.939309  792122 out.go:358] Setting ErrFile to fd 2...
	I0210 11:17:23.939399  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0210 11:17:23.939494  792122 out.go:270] X Problems detected in kubelet:
	W0210 11:17:23.939681  792122 out.go:270]   Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.939739  792122 out.go:270]   Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.939779  792122 out.go:270]   Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0210 11:17:23.939835  792122 out.go:270]   Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	W0210 11:17:23.939870  792122 out.go:270]   Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0210 11:17:23.939920  792122 out.go:358] Setting ErrFile to fd 2...
	I0210 11:17:23.939941  792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:17:25.524248  802973 out.go:235]   - Generating certificates and keys ...
	I0210 11:17:25.524400  802973 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:17:25.524480  802973 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:17:26.278242  802973 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 11:17:26.978612  802973 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 11:17:27.433595  802973 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 11:17:28.026034  802973 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 11:17:28.243561  802973 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 11:17:28.243900  802973 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-822142 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0210 11:17:28.582828  802973 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 11:17:28.583209  802973 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-822142 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0210 11:17:29.578654  802973 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 11:17:30.184922  802973 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 11:17:30.470498  802973 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 11:17:30.470801  802973 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:17:31.196190  802973 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:17:32.138487  802973 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 11:17:32.294362  802973 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:17:32.615934  802973 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:17:33.360822  802973 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:17:33.361413  802973 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:17:33.364301  802973 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:17:33.941594  792122 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0210 11:17:33.966671  792122 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0210 11:17:33.970082  792122 out.go:201] 
	W0210 11:17:33.973071  792122 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0210 11:17:33.973117  792122 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0210 11:17:33.973146  792122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0210 11:17:33.973158  792122 out.go:270] * 
	W0210 11:17:33.974109  792122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0210 11:17:33.977004  792122 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b6b6d099aaf4c       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   5071f6b766b05       dashboard-metrics-scraper-8d5bb5db8-r58kw
	b7ef8424fcbcb       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   9808bf553cd91       storage-provisioner
	6c8852ecb1c21       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   89f5de618308e       kubernetes-dashboard-cd95d586-s9bfz
	2517ca7acc440       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   252c890a5cc31       kube-proxy-qt8rk
	221dcab82eb8d       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   9808bf553cd91       storage-provisioner
	23929f63f011f       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   c93197f6b1902       coredns-74ff55c5b-7fkgl
	63daa6ac11e65       e1181ee320546       5 minutes ago       Running             kindnet-cni                 1                   4389549f8ec5e       kindnet-l58wz
	bcea7e59a62ef       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   e5aeb29af6eac       busybox
	2ce24aaa2eea1       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   356a6949169e7       kube-scheduler-old-k8s-version-705847
	aec35b105aa1d       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   6695d802a859f       kube-controller-manager-old-k8s-version-705847
	ad6d38edf5bc8       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   0b14db362d48d       kube-apiserver-old-k8s-version-705847
	4087c4b9c5558       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   6c10fc40769a8       etcd-old-k8s-version-705847
	a664cfa85004d       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   ee1e0628d275e       busybox
	a122c6cf80f3c       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   9e0a78abac64a       coredns-74ff55c5b-7fkgl
	9db35ce7df6ab       e1181ee320546       8 minutes ago       Exited              kindnet-cni                 0                   f3753ff4ce47e       kindnet-l58wz
	6d39bdbc1d81b       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   c12baeac0cbf9       kube-proxy-qt8rk
	d49223327cb59       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   5e708f9c1252f       kube-controller-manager-old-k8s-version-705847
	04c0549198596       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   f8ea3c3202fd4       kube-apiserver-old-k8s-version-705847
	3fd7073fac25b       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   27f1abb37b719       etcd-old-k8s-version-705847
	8d3d8d966ae37       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   214ceeb2fd30c       kube-scheduler-old-k8s-version-705847
	
	
	==> containerd <==
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.173940146Z" level=info msg="CreateContainer within sandbox \"5071f6b766b058cd6cc8b1e44170820d9e60ad007cca015ea3d2cd1af965c68c\" for container name:\"dashboard-metrics-scraper\" attempt:4"
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.199089636Z" level=info msg="CreateContainer within sandbox \"5071f6b766b058cd6cc8b1e44170820d9e60ad007cca015ea3d2cd1af965c68c\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\""
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.199691913Z" level=info msg="StartContainer for \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\""
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.269620863Z" level=info msg="StartContainer for \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\" returns successfully"
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.269710922Z" level=info msg="received exit event container_id:\"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\" id:\"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\" pid:3046 exit_status:255 exited_at:{seconds:1739186031 nanos:267395570}"
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.295129777Z" level=info msg="shim disconnected" id=5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436 namespace=k8s.io
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.295201423Z" level=warning msg="cleaning up after shim disconnected" id=5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436 namespace=k8s.io
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.295211713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.898811613Z" level=info msg="RemoveContainer for \"0f075831770b096e8f8915b0d5950ed20018a644c353cf93ace661bad0f72c56\""
	Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.906262996Z" level=info msg="RemoveContainer for \"0f075831770b096e8f8915b0d5950ed20018a644c353cf93ace661bad0f72c56\" returns successfully"
	Feb 10 11:14:55 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:14:55.166373189Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 10 11:14:55 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:14:55.171964733Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Feb 10 11:14:55 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:14:55.174007939Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Feb 10 11:14:55 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:14:55.174108287Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.168644673Z" level=info msg="CreateContainer within sandbox \"5071f6b766b058cd6cc8b1e44170820d9e60ad007cca015ea3d2cd1af965c68c\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.191117048Z" level=info msg="CreateContainer within sandbox \"5071f6b766b058cd6cc8b1e44170820d9e60ad007cca015ea3d2cd1af965c68c\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\""
	Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.191944244Z" level=info msg="StartContainer for \"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\""
	Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.257124435Z" level=info msg="StartContainer for \"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\" returns successfully"
	Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.258610499Z" level=info msg="received exit event container_id:\"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\" id:\"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\" pid:3300 exit_status:255 exited_at:{seconds:1739186120 nanos:258326332}"
	Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.282252961Z" level=info msg="shim disconnected" id=b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6 namespace=k8s.io
	Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.282314975Z" level=warning msg="cleaning up after shim disconnected" id=b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6 namespace=k8s.io
	Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.282457390Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.293959895Z" level=warning msg="cleanup warnings time=\"2025-02-10T11:15:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Feb 10 11:15:21 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:21.174400712Z" level=info msg="RemoveContainer for \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\""
	Feb 10 11:15:21 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:21.182992741Z" level=info msg="RemoveContainer for \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\" returns successfully"
	
	
	==> coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34035 - 28803 "HINFO IN 5542394030849349071.1439891449229212650. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022263539s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0210 11:12:15.168436       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-10 11:11:45.16787272 +0000 UTC m=+0.048757483) (total time: 30.000457611s):
	Trace[2019727887]: [30.000457611s] [30.000457611s] END
	E0210 11:12:15.168469       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0210 11:12:15.168816       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-10 11:11:45.168386802 +0000 UTC m=+0.049271565) (total time: 30.000398214s):
	Trace[939984059]: [30.000398214s] [30.000398214s] END
	E0210 11:12:15.168890       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0210 11:12:15.168839       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-10 11:11:45.168634201 +0000 UTC m=+0.049518981) (total time: 30.000187868s):
	Trace[911902081]: [30.000187868s] [30.000187868s] END
	E0210 11:12:15.168956       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:36861 - 19880 "HINFO IN 8376676006252621669.5710798605214046510. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031573941s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-705847
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-705847
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=old-k8s-version-705847
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T11_08_54_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 11:08:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-705847
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 11:17:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 11:17:35 +0000   Mon, 10 Feb 2025 11:08:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 11:17:35 +0000   Mon, 10 Feb 2025 11:08:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 11:17:35 +0000   Mon, 10 Feb 2025 11:08:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 11:17:35 +0000   Mon, 10 Feb 2025 11:09:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-705847
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 6249c507abed417499f16b655cb9a80c
	  System UUID:                8b83af55-3f47-4d24-9d6d-e0877947e999
	  Boot ID:                    562c7f3c-b16a-445a-b1a8-6d6932d5b74d
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 coredns-74ff55c5b-7fkgl                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m27s
	  kube-system                 etcd-old-k8s-version-705847                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m34s
	  kube-system                 kindnet-l58wz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m27s
	  kube-system                 kube-apiserver-old-k8s-version-705847             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-controller-manager-old-k8s-version-705847    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-qt8rk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-scheduler-old-k8s-version-705847             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 metrics-server-9975d5f86-nvn7z                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m33s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-r58kw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-s9bfz               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m53s (x5 over 8m53s)  kubelet     Node old-k8s-version-705847 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m53s (x5 over 8m53s)  kubelet     Node old-k8s-version-705847 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m53s (x4 over 8m53s)  kubelet     Node old-k8s-version-705847 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m34s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m34s                  kubelet     Node old-k8s-version-705847 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s                  kubelet     Node old-k8s-version-705847 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s                  kubelet     Node old-k8s-version-705847 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m27s                  kubelet     Node old-k8s-version-705847 status is now: NodeReady
	  Normal  Starting                 8m26s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m5s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-705847 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-705847 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x7 over 6m4s)    kubelet     Node old-k8s-version-705847 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m50s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] <==
	raft2025/02/10 11:08:44 INFO: ea7e25599daad906 became candidate at term 2
	raft2025/02/10 11:08:44 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2025/02/10 11:08:44 INFO: ea7e25599daad906 became leader at term 2
	raft2025/02/10 11:08:44 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2025-02-10 11:08:44.909123 I | etcdserver: setting up the initial cluster version to 3.4
	2025-02-10 11:08:44.909450 I | etcdserver: published {Name:old-k8s-version-705847 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2025-02-10 11:08:44.916656 I | embed: ready to serve client requests
	2025-02-10 11:08:44.917033 I | embed: ready to serve client requests
	2025-02-10 11:08:44.918500 I | embed: serving client requests on 127.0.0.1:2379
	2025-02-10 11:08:44.925626 I | embed: serving client requests on 192.168.76.2:2379
	2025-02-10 11:08:44.936005 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-02-10 11:08:44.940823 I | etcdserver/api: enabled capabilities for version 3.4
	2025-02-10 11:09:05.286622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:09:11.121280 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:09:21.121462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:09:31.121619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:09:41.121327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:09:51.121478 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:10:01.121391 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:10:11.121346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:10:21.121333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:10:31.121561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:10:41.121402 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:10:51.121337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:11:01.122117 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] <==
	2025-02-10 11:13:27.800019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:13:37.800064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:13:47.800165 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:13:57.799986 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:14:07.800072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:14:17.800100 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:14:27.800071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:14:37.800116 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:14:47.799980 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:14:57.800086 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:15:07.800046 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:15:17.800128 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:15:27.800041 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:15:37.800043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:15:47.800081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:15:57.800181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:16:07.800016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:16:17.800068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:16:27.800169 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:16:37.800069 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:16:47.800010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:16:57.800061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:17:07.800252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:17:17.800008 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-02-10 11:17:27.806739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:17:36 up  4:00,  0 users,  load average: 1.53, 1.89, 2.38
	Linux old-k8s-version-705847 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] <==
	I0210 11:15:35.502507       1 main.go:301] handling current node
	I0210 11:15:45.494656       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:15:45.494692       1 main.go:301] handling current node
	I0210 11:15:55.494712       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:15:55.494749       1 main.go:301] handling current node
	I0210 11:16:05.502144       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:16:05.502180       1 main.go:301] handling current node
	I0210 11:16:15.503006       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:16:15.503043       1 main.go:301] handling current node
	I0210 11:16:25.500881       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:16:25.500919       1 main.go:301] handling current node
	I0210 11:16:35.501627       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:16:35.501662       1 main.go:301] handling current node
	I0210 11:16:45.494746       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:16:45.494783       1 main.go:301] handling current node
	I0210 11:16:55.501879       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:16:55.501915       1 main.go:301] handling current node
	I0210 11:17:05.501620       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:17:05.501886       1 main.go:301] handling current node
	I0210 11:17:15.502218       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:17:15.502347       1 main.go:301] handling current node
	I0210 11:17:25.501596       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:17:25.501819       1 main.go:301] handling current node
	I0210 11:17:35.509868       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:17:35.510072       1 main.go:301] handling current node
	
	
	==> kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] <==
	I0210 11:09:13.394168       1 controller.go:365] Waiting for informer caches to sync
	I0210 11:09:13.394221       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0210 11:09:13.594367       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0210 11:09:13.594610       1 metrics.go:61] Registering metrics
	I0210 11:09:13.594844       1 controller.go:401] Syncing nftables rules
	I0210 11:09:23.401593       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:09:23.401651       1 main.go:301] handling current node
	I0210 11:09:33.394640       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:09:33.394674       1 main.go:301] handling current node
	I0210 11:09:43.403606       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:09:43.403639       1 main.go:301] handling current node
	I0210 11:09:53.401626       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:09:53.401659       1 main.go:301] handling current node
	I0210 11:10:03.393901       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:10:03.393938       1 main.go:301] handling current node
	I0210 11:10:13.393906       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:10:13.393946       1 main.go:301] handling current node
	I0210 11:10:23.399543       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:10:23.399575       1 main.go:301] handling current node
	I0210 11:10:33.403631       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:10:33.403667       1 main.go:301] handling current node
	I0210 11:10:43.401724       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:10:43.401759       1 main.go:301] handling current node
	I0210 11:10:53.396258       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0210 11:10:53.396398       1 main.go:301] handling current node
	
	
	==> kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] <==
	I0210 11:08:51.804278       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0210 11:08:51.804311       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0210 11:08:51.841257       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0210 11:08:51.845901       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0210 11:08:51.845927       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0210 11:08:52.315954       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 11:08:52.368762       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0210 11:08:52.476672       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0210 11:08:52.477844       1 controller.go:606] quota admission added evaluator for: endpoints
	I0210 11:08:52.483410       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 11:08:53.515524       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0210 11:08:54.147393       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0210 11:08:54.241030       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0210 11:09:02.590136       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 11:09:09.498700       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0210 11:09:09.676691       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0210 11:09:22.016589       1 client.go:360] parsed scheme: "passthrough"
	I0210 11:09:22.016637       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0210 11:09:22.016668       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0210 11:10:01.784602       1 client.go:360] parsed scheme: "passthrough"
	I0210 11:10:01.784664       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0210 11:10:01.784674       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0210 11:10:34.840672       1 client.go:360] parsed scheme: "passthrough"
	I0210 11:10:34.840732       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0210 11:10:34.840743       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] <==
	I0210 11:14:23.079183       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0210 11:14:23.079257       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0210 11:14:45.070745       1 handler_proxy.go:102] no RequestInfo found in the context
	E0210 11:14:45.071102       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0210 11:14:45.071215       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0210 11:14:55.509715       1 client.go:360] parsed scheme: "passthrough"
	I0210 11:14:55.509763       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0210 11:14:55.509796       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0210 11:15:28.022679       1 client.go:360] parsed scheme: "passthrough"
	I0210 11:15:28.022724       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0210 11:15:28.022733       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0210 11:16:02.777037       1 client.go:360] parsed scheme: "passthrough"
	I0210 11:16:02.777081       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0210 11:16:02.777089       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0210 11:16:39.513398       1 client.go:360] parsed scheme: "passthrough"
	I0210 11:16:39.513444       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0210 11:16:39.513455       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0210 11:16:43.513630       1 handler_proxy.go:102] no RequestInfo found in the context
	E0210 11:16:43.513709       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0210 11:16:43.513723       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0210 11:17:11.892638       1 client.go:360] parsed scheme: "passthrough"
	I0210 11:17:11.892684       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0210 11:17:11.892694       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] <==
	E0210 11:13:32.869060       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0210 11:13:37.220910       1 request.go:655] Throttling request took 1.048450209s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0210 11:13:38.072352       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0210 11:14:03.402906       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0210 11:14:09.722886       1 request.go:655] Throttling request took 1.048243678s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0210 11:14:10.574609       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0210 11:14:33.904928       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0210 11:14:42.178192       1 request.go:655] Throttling request took 1.001625485s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W0210 11:14:43.076624       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0210 11:15:04.410472       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0210 11:15:14.726998       1 request.go:655] Throttling request took 1.048336791s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0210 11:15:15.578520       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0210 11:15:34.912309       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0210 11:15:47.180578       1 request.go:655] Throttling request took 1.00001746s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0210 11:15:48.080667       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0210 11:16:05.414166       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0210 11:16:19.731340       1 request.go:655] Throttling request took 1.04839124s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0210 11:16:20.582835       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0210 11:16:35.915969       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0210 11:16:52.233366       1 request.go:655] Throttling request took 1.048396649s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	W0210 11:16:53.084834       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0210 11:17:06.417855       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0210 11:17:24.735176       1 request.go:655] Throttling request took 1.048214063s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0210 11:17:25.588134       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0210 11:17:36.919730       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] <==
	I0210 11:09:09.614269       1 shared_informer.go:247] Caches are synced for taint 
	I0210 11:09:09.614363       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0210 11:09:09.614478       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-705847. Assuming now as a timestamp.
	I0210 11:09:09.614559       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0210 11:09:09.614844       1 shared_informer.go:247] Caches are synced for GC 
	I0210 11:09:09.615583       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0210 11:09:09.615779       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0210 11:09:09.616457       1 event.go:291] "Event occurred" object="old-k8s-version-705847" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-705847 event: Registered Node old-k8s-version-705847 in Controller"
	I0210 11:09:09.647292       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0210 11:09:09.710786       1 shared_informer.go:247] Caches are synced for resource quota 
	I0210 11:09:09.716333       1 shared_informer.go:247] Caches are synced for resource quota 
	I0210 11:09:09.722530       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-l58wz"
	I0210 11:09:09.728183       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qt8rk"
	E0210 11:09:09.825040       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"a0093706-ef42-441b-9f97-68ae7e28fb5f", ResourceVersion:"262", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63874782534, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400138c7a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400138c7c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x400138c7e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001299f80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c
800), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c820), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400138c860)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400061a540), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d6aef8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004c20e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400095cdc8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d6af48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0210 11:09:09.840321       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3d4bd9ea-a394-478f-8f2d-6ff82b5400eb", ResourceVersion:"276", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63874782534, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241212-9f82dd49\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400138c8c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400138c8e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400138c900), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c920), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c940), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c960), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241212-9f82dd49", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400138c980)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400138c9c0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400061ac00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d6b168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004c2150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400095cdd0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d6b1b0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0210 11:09:09.882983       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0210 11:09:09.914498       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3d4bd9ea-a394-478f-8f2d-6ff82b5400eb", ResourceVersion:"416", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63874782534, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241212-9f82dd49\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d9bd00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d9bd20)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d9bd40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d9bd60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001d9bd80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d9bda0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d9bdc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d9bde0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241212-9f82dd49", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d9be00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d9be40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001d99320), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001dc8288), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004615e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400072b7f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001dc82d0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0210 11:09:10.160604       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0210 11:09:10.160626       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0210 11:09:10.184330       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0210 11:09:11.224629       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0210 11:09:11.247535       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-bbmfl"
	I0210 11:09:14.614820       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0210 11:11:02.247284       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0210 11:11:02.457821       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] <==
	I0210 11:11:46.563285       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0210 11:11:46.563364       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0210 11:11:46.597454       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0210 11:11:46.597755       1 server_others.go:185] Using iptables Proxier.
	I0210 11:11:46.598118       1 server.go:650] Version: v1.20.0
	I0210 11:11:46.598814       1 config.go:315] Starting service config controller
	I0210 11:11:46.598920       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0210 11:11:46.599047       1 config.go:224] Starting endpoint slice config controller
	I0210 11:11:46.602802       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0210 11:11:46.702921       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0210 11:11:46.702953       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] <==
	I0210 11:09:10.707694       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0210 11:09:10.707793       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0210 11:09:10.755738       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0210 11:09:10.755829       1 server_others.go:185] Using iptables Proxier.
	I0210 11:09:10.756028       1 server.go:650] Version: v1.20.0
	I0210 11:09:10.756521       1 config.go:315] Starting service config controller
	I0210 11:09:10.756533       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0210 11:09:10.759080       1 config.go:224] Starting endpoint slice config controller
	I0210 11:09:10.759092       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0210 11:09:10.856774       1 shared_informer.go:247] Caches are synced for service config 
	I0210 11:09:10.859791       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] <==
	I0210 11:11:37.649387       1 serving.go:331] Generated self-signed cert in-memory
	W0210 11:11:42.296414       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0210 11:11:42.299527       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0210 11:11:42.299711       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0210 11:11:42.299772       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 11:11:42.492809       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0210 11:11:42.495893       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 11:11:42.495910       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 11:11:42.495925       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0210 11:11:42.597631       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] <==
	I0210 11:08:48.714590       1 serving.go:331] Generated self-signed cert in-memory
	W0210 11:08:51.082798       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0210 11:08:51.083028       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0210 11:08:51.083170       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0210 11:08:51.083252       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 11:08:51.138868       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0210 11:08:51.141706       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 11:08:51.141734       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 11:08:51.141755       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0210 11:08:51.166442       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0210 11:08:51.167587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 11:08:51.173961       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 11:08:51.174479       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 11:08:51.174638       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 11:08:51.176219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0210 11:08:51.176368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 11:08:51.177681       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0210 11:08:51.185758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0210 11:08:51.189797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0210 11:08:51.193329       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0210 11:08:51.203326       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 11:08:52.026223       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 11:08:52.057637       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 11:08:54.841867       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: I0210 11:15:51.165086     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
	Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: I0210 11:16:03.165051     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
	Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: I0210 11:16:14.165134     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
	Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: I0210 11:16:28.165239     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
	Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: I0210 11:16:43.165081     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
	Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: I0210 11:16:56.165060     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
	Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: I0210 11:17:09.165064     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
	Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 10 11:17:24 old-k8s-version-705847 kubelet[665]: I0210 11:17:24.167189     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
	Feb 10 11:17:24 old-k8s-version-705847 kubelet[665]: E0210 11:17:24.168374     665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
	Feb 10 11:17:28 old-k8s-version-705847 kubelet[665]: E0210 11:17:28.165971     665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] <==
	2025/02/10 11:12:08 Using namespace: kubernetes-dashboard
	2025/02/10 11:12:08 Using in-cluster config to connect to apiserver
	2025/02/10 11:12:08 Using secret token for csrf signing
	2025/02/10 11:12:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/10 11:12:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/10 11:12:08 Successful initial request to the apiserver, version: v1.20.0
	2025/02/10 11:12:08 Generating JWE encryption key
	2025/02/10 11:12:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/10 11:12:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/10 11:12:10 Initializing JWE encryption key from synchronized object
	2025/02/10 11:12:10 Creating in-cluster Sidecar client
	2025/02/10 11:12:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:12:10 Serving insecurely on HTTP port: 9090
	2025/02/10 11:12:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:13:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:13:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:14:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:14:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:15:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:15:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:16:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:16:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:17:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/10 11:12:08 Starting overwatch
	
	
	==> storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] <==
	I0210 11:11:45.386345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0210 11:12:15.390110       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] <==
	I0210 11:12:28.309062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0210 11:12:28.333562       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0210 11:12:28.333623       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0210 11:12:45.867628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0210 11:12:45.868027       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-705847_a73b877e-2d56-419b-9f14-0d434040a716!
	I0210 11:12:45.869712       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4dfaaa55-08a0-4e59-9db4-d4e5746b7f58", APIVersion:"v1", ResourceVersion:"851", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-705847_a73b877e-2d56-419b-9f14-0d434040a716 became leader
	I0210 11:12:45.969079       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-705847_a73b877e-2d56-419b-9f14-0d434040a716!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-705847 -n old-k8s-version-705847
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-705847 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-nvn7z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-705847 describe pod metrics-server-9975d5f86-nvn7z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-705847 describe pod metrics-server-9975d5f86-nvn7z: exit status 1 (187.823649ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-nvn7z" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-705847 describe pod metrics-server-9975d5f86-nvn7z: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (383.04s)
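For reference (not part of the captured output above), the post-mortem checks the harness ran can be repeated by hand against the same profile. This is a minimal sketch, assuming the old-k8s-version-705847 profile and its kubeconfig context still exist on the test host; the commands are copied from the helpers_test.go steps shown above, and the pod name is the one the harness reported (it had already been deleted by the time describe ran, hence the NotFound error):

	# report the API server status for the profile
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-705847 -n old-k8s-version-705847
	# list pods in any namespace that are not in the Running phase
	kubectl --context old-k8s-version-705847 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# describe a non-running pod reported by the previous command
	kubectl --context old-k8s-version-705847 describe pod metrics-server-9975d5f86-nvn7z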

                                                
                                    

Test pass (300/331)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.74
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.14
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.1/json-events 5.23
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.1
18 TestDownloadOnly/v1.32.1/DeleteAll 0.24
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 216.06
29 TestAddons/serial/Volcano 40.1
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.89
35 TestAddons/parallel/Registry 15.44
36 TestAddons/parallel/Ingress 19.83
37 TestAddons/parallel/InspektorGadget 11.8
38 TestAddons/parallel/MetricsServer 5.84
40 TestAddons/parallel/CSI 52.14
41 TestAddons/parallel/Headlamp 16.19
42 TestAddons/parallel/CloudSpanner 5.96
43 TestAddons/parallel/LocalPath 8.73
44 TestAddons/parallel/NvidiaDevicePlugin 6.64
45 TestAddons/parallel/Yakd 11.87
47 TestAddons/StoppedEnableDisable 12.26
48 TestCertOptions 32.89
49 TestCertExpiration 227.8
51 TestForceSystemdFlag 39.57
52 TestForceSystemdEnv 45.21
53 TestDockerEnvContainerd 48.17
58 TestErrorSpam/setup 31.27
59 TestErrorSpam/start 0.76
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 1.75
62 TestErrorSpam/unpause 1.89
63 TestErrorSpam/stop 1.46
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 50.1
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.64
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.18
75 TestFunctional/serial/CacheCmd/cache/add_local 1.3
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
83 TestFunctional/serial/ExtraConfig 46.31
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.73
86 TestFunctional/serial/LogsFileCmd 1.81
87 TestFunctional/serial/InvalidService 4.42
89 TestFunctional/parallel/ConfigCmd 0.53
90 TestFunctional/parallel/DashboardCmd 11.24
91 TestFunctional/parallel/DryRun 0.49
92 TestFunctional/parallel/InternationalLanguage 0.28
93 TestFunctional/parallel/StatusCmd 1.27
97 TestFunctional/parallel/ServiceCmdConnect 11.75
98 TestFunctional/parallel/AddonsCmd 0.21
99 TestFunctional/parallel/PersistentVolumeClaim 27.29
101 TestFunctional/parallel/SSHCmd 0.74
102 TestFunctional/parallel/CpCmd 2.44
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 2.16
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.21
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.5
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
127 TestFunctional/parallel/ServiceCmd/List 0.59
128 TestFunctional/parallel/ProfileCmd/profile_list 0.5
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
132 TestFunctional/parallel/MountCmd/any-port 8.82
133 TestFunctional/parallel/ServiceCmd/Format 0.53
134 TestFunctional/parallel/ServiceCmd/URL 0.43
135 TestFunctional/parallel/MountCmd/specific-port 2.38
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.75
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.41
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.91
144 TestFunctional/parallel/ImageCommands/Setup 0.75
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.53
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.37
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.52
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 114.39
163 TestMultiControlPlane/serial/DeployApp 31.83
164 TestMultiControlPlane/serial/PingHostFromPods 1.69
165 TestMultiControlPlane/serial/AddWorkerNode 24.73
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.97
168 TestMultiControlPlane/serial/CopyFile 19.42
169 TestMultiControlPlane/serial/StopSecondaryNode 12.79
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
171 TestMultiControlPlane/serial/RestartSecondaryNode 19.79
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.27
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 137.45
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.6
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
176 TestMultiControlPlane/serial/StopCluster 35.94
177 TestMultiControlPlane/serial/RestartCluster 62.39
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
179 TestMultiControlPlane/serial/AddSecondaryNode 45.05
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
184 TestJSONOutput/start/Command 46.53
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.72
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.67
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.77
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 38.08
210 TestKicCustomNetwork/use_default_bridge_network 31.77
211 TestKicExistingNetwork 31.78
212 TestKicCustomSubnet 34.64
213 TestKicStaticIP 32.29
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 67.54
218 TestMountStart/serial/StartWithMountFirst 6.74
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 8.52
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.64
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.21
225 TestMountStart/serial/RestartStopped 7.31
226 TestMountStart/serial/VerifyMountPostStop 0.26
229 TestMultiNode/serial/FreshStart2Nodes 67.82
230 TestMultiNode/serial/DeployApp2Nodes 15.66
231 TestMultiNode/serial/PingHostFrom2Pods 1.01
232 TestMultiNode/serial/AddNode 15.93
233 TestMultiNode/serial/MultiNodeLabels 0.1
234 TestMultiNode/serial/ProfileList 0.72
235 TestMultiNode/serial/CopyFile 10.09
236 TestMultiNode/serial/StopNode 2.25
237 TestMultiNode/serial/StartAfterStop 9.57
238 TestMultiNode/serial/RestartKeepsNodes 88.43
239 TestMultiNode/serial/DeleteNode 5.36
240 TestMultiNode/serial/StopMultiNode 23.89
241 TestMultiNode/serial/RestartMultiNode 53.64
242 TestMultiNode/serial/ValidateNameConflict 33.5
247 TestPreload 122.1
249 TestScheduledStopUnix 104.83
252 TestInsufficientStorage 12.94
253 TestRunningBinaryUpgrade 84.76
255 TestKubernetesUpgrade 349.32
256 TestMissingContainerUpgrade 172.81
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
259 TestNoKubernetes/serial/StartWithK8s 39.51
260 TestNoKubernetes/serial/StartWithStopK8s 19.21
261 TestNoKubernetes/serial/Start 5.74
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
263 TestNoKubernetes/serial/ProfileList 0.98
264 TestNoKubernetes/serial/Stop 1.21
265 TestNoKubernetes/serial/StartNoArgs 6.5
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
267 TestStoppedBinaryUpgrade/Setup 0.72
268 TestStoppedBinaryUpgrade/Upgrade 110.26
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
278 TestPause/serial/Start 69.86
279 TestPause/serial/SecondStartNoReconfiguration 6.85
280 TestPause/serial/Pause 1.27
281 TestPause/serial/VerifyStatus 0.47
282 TestPause/serial/Unpause 0.99
283 TestPause/serial/PauseAgain 1.25
284 TestPause/serial/DeletePaused 3.22
285 TestPause/serial/VerifyDeletedResources 1.03
293 TestNetworkPlugins/group/false 5.14
298 TestStartStop/group/old-k8s-version/serial/FirstStart 162.95
300 TestStartStop/group/no-preload/serial/FirstStart 72.02
301 TestStartStop/group/old-k8s-version/serial/DeployApp 10.69
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.82
303 TestStartStop/group/old-k8s-version/serial/Stop 12.51
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
306 TestStartStop/group/no-preload/serial/DeployApp 9.47
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.45
308 TestStartStop/group/no-preload/serial/Stop 12.05
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
310 TestStartStop/group/no-preload/serial/SecondStart 266.91
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.33
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.45
314 TestStartStop/group/no-preload/serial/Pause 3.64
316 TestStartStop/group/embed-certs/serial/FirstStart 72.32
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.18
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
320 TestStartStop/group/old-k8s-version/serial/Pause 3.44
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.91
323 TestStartStop/group/embed-certs/serial/DeployApp 9.42
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.56
325 TestStartStop/group/embed-certs/serial/Stop 12.12
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
327 TestStartStop/group/embed-certs/serial/SecondStart 289.6
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.71
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.2
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
337 TestStartStop/group/embed-certs/serial/Pause 3.23
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
340 TestStartStop/group/newest-cni/serial/FirstStart 44.23
341 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
342 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.72
343 TestNetworkPlugins/group/auto/Start 59.98
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.85
346 TestStartStop/group/newest-cni/serial/Stop 1.37
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.3
348 TestStartStop/group/newest-cni/serial/SecondStart 17.52
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
352 TestStartStop/group/newest-cni/serial/Pause 3.15
353 TestNetworkPlugins/group/auto/KubeletFlags 0.48
354 TestNetworkPlugins/group/auto/NetCatPod 10.45
355 TestNetworkPlugins/group/kindnet/Start 57.64
356 TestNetworkPlugins/group/auto/DNS 0.24
357 TestNetworkPlugins/group/auto/Localhost 0.2
358 TestNetworkPlugins/group/auto/HairPin 0.22
359 TestNetworkPlugins/group/calico/Start 68.13
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.45
362 TestNetworkPlugins/group/kindnet/NetCatPod 10.44
363 TestNetworkPlugins/group/kindnet/DNS 0.23
364 TestNetworkPlugins/group/kindnet/Localhost 0.19
365 TestNetworkPlugins/group/kindnet/HairPin 0.18
366 TestNetworkPlugins/group/custom-flannel/Start 54.92
367 TestNetworkPlugins/group/calico/ControllerPod 6
368 TestNetworkPlugins/group/calico/KubeletFlags 0.35
369 TestNetworkPlugins/group/calico/NetCatPod 11.36
370 TestNetworkPlugins/group/calico/DNS 0.34
371 TestNetworkPlugins/group/calico/Localhost 0.17
372 TestNetworkPlugins/group/calico/HairPin 0.2
373 TestNetworkPlugins/group/enable-default-cni/Start 75.51
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.5
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
376 TestNetworkPlugins/group/custom-flannel/DNS 0.22
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
379 TestNetworkPlugins/group/flannel/Start 52.9
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.42
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.46
387 TestNetworkPlugins/group/flannel/NetCatPod 11.33
388 TestNetworkPlugins/group/bridge/Start 75.31
389 TestNetworkPlugins/group/flannel/DNS 0.25
390 TestNetworkPlugins/group/flannel/Localhost 0.21
391 TestNetworkPlugins/group/flannel/HairPin 0.2
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
393 TestNetworkPlugins/group/bridge/NetCatPod 10.27
394 TestNetworkPlugins/group/bridge/DNS 0.16
395 TestNetworkPlugins/group/bridge/Localhost 0.14
396 TestNetworkPlugins/group/bridge/HairPin 0.21
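
Any entry in the table above can normally be reproduced in isolation by filtering the Go integration suite to that one test name. The lines below are an illustrative sketch only, not this job's actual invocation: they assume a minikube source checkout plus an arm64 Linux Docker host, and the harness flag passed after -args (--minikube-start-args) is an assumption about the suite's flag name rather than something taken from this run.

	# hypothetical local re-run of a single table entry
	go test ./test/integration -run 'TestMultiControlPlane/serial/StartCluster' -timeout 60m \
	  -args --minikube-start-args='--driver=docker --container-runtime=containerd'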
x
+
TestDownloadOnly/v1.20.0/json-events (6.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-212110 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-212110 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.737904281s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0210 10:24:29.985718  581629 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0210 10:24:29.985799  581629 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
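
This check only asserts that the preload tarball named in the log lines above is already present in the cache directory. The equivalent manual check is a directory listing (a sketch; the path and file name are taken verbatim from the log):

	ls -lh /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/
	# expected to contain: preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4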

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-212110
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-212110: exit status 85 (138.422227ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-212110 | jenkins | v1.35.0 | 10 Feb 25 10:24 UTC |          |
	|         | -p download-only-212110        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:24:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:24:23.293882  581634 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:24:23.294006  581634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:24:23.294018  581634 out.go:358] Setting ErrFile to fd 2...
	I0210 10:24:23.294024  581634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:24:23.294260  581634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	W0210 10:24:23.294395  581634 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20385-576242/.minikube/config/config.json: open /home/jenkins/minikube-integration/20385-576242/.minikube/config/config.json: no such file or directory
	I0210 10:24:23.294811  581634 out.go:352] Setting JSON to true
	I0210 10:24:23.295772  581634 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11208,"bootTime":1739171855,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0210 10:24:23.295852  581634 start.go:139] virtualization:  
	I0210 10:24:23.300206  581634 out.go:97] [download-only-212110] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0210 10:24:23.300380  581634 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball: no such file or directory
	I0210 10:24:23.300491  581634 notify.go:220] Checking for updates...
	I0210 10:24:23.304117  581634 out.go:169] MINIKUBE_LOCATION=20385
	I0210 10:24:23.307326  581634 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:24:23.310061  581634 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	I0210 10:24:23.312926  581634 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	I0210 10:24:23.315705  581634 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0210 10:24:23.321306  581634 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 10:24:23.321578  581634 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:24:23.352911  581634 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 10:24:23.353006  581634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 10:24:23.407128  581634 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:56 SystemTime:2025-02-10 10:24:23.398133105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 10:24:23.407239  581634 docker.go:318] overlay module found
	I0210 10:24:23.410303  581634 out.go:97] Using the docker driver based on user configuration
	I0210 10:24:23.410330  581634 start.go:297] selected driver: docker
	I0210 10:24:23.410338  581634 start.go:901] validating driver "docker" against <nil>
	I0210 10:24:23.410454  581634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 10:24:23.468959  581634 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:56 SystemTime:2025-02-10 10:24:23.453844399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 10:24:23.469159  581634 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 10:24:23.469455  581634 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0210 10:24:23.469622  581634 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 10:24:23.472743  581634 out.go:169] Using Docker driver with root privileges
	I0210 10:24:23.475597  581634 cni.go:84] Creating CNI manager for ""
	I0210 10:24:23.475660  581634 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 10:24:23.475674  581634 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 10:24:23.475763  581634 start.go:340] cluster config:
	{Name:download-only-212110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-212110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:24:23.478673  581634 out.go:97] Starting "download-only-212110" primary control-plane node in "download-only-212110" cluster
	I0210 10:24:23.478692  581634 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0210 10:24:23.481425  581634 out.go:97] Pulling base image v0.0.46 ...
	I0210 10:24:23.481448  581634 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0210 10:24:23.481691  581634 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0210 10:24:23.499449  581634 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0210 10:24:23.499669  581634 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0210 10:24:23.499778  581634 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0210 10:24:23.540452  581634 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0210 10:24:23.540480  581634 cache.go:56] Caching tarball of preloaded images
	I0210 10:24:23.540641  581634 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0210 10:24:23.543846  581634 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0210 10:24:23.543878  581634 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0210 10:24:23.625810  581634 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0210 10:24:28.043183  581634 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	
	
	* The control-plane node download-only-212110 host does not exist
	  To start a cluster, run: "minikube start -p download-only-212110"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-212110
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/json-events (5.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-357573 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-357573 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.227926733s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (5.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0210 10:24:35.726614  581629 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0210 10:24:35.726656  581629 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-357573
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-357573: exit status 85 (95.270422ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-212110 | jenkins | v1.35.0 | 10 Feb 25 10:24 UTC |                     |
	|         | -p download-only-212110        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 10 Feb 25 10:24 UTC | 10 Feb 25 10:24 UTC |
	| delete  | -p download-only-212110        | download-only-212110 | jenkins | v1.35.0 | 10 Feb 25 10:24 UTC | 10 Feb 25 10:24 UTC |
	| start   | -o=json --download-only        | download-only-357573 | jenkins | v1.35.0 | 10 Feb 25 10:24 UTC |                     |
	|         | -p download-only-357573        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:24:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:24:30.549627  581830 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:24:30.549795  581830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:24:30.549803  581830 out.go:358] Setting ErrFile to fd 2...
	I0210 10:24:30.549809  581830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:24:30.550062  581830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 10:24:30.550462  581830 out.go:352] Setting JSON to true
	I0210 10:24:30.551310  581830 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11215,"bootTime":1739171855,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0210 10:24:30.551380  581830 start.go:139] virtualization:  
	I0210 10:24:30.554870  581830 out.go:97] [download-only-357573] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0210 10:24:30.555157  581830 notify.go:220] Checking for updates...
	I0210 10:24:30.558194  581830 out.go:169] MINIKUBE_LOCATION=20385
	I0210 10:24:30.561262  581830 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:24:30.564136  581830 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	I0210 10:24:30.567045  581830 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	I0210 10:24:30.570167  581830 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0210 10:24:30.575903  581830 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 10:24:30.576218  581830 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:24:30.597942  581830 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 10:24:30.598063  581830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 10:24:30.654039  581830 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-02-10 10:24:30.645544812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 10:24:30.654159  581830 docker.go:318] overlay module found
	I0210 10:24:30.658149  581830 out.go:97] Using the docker driver based on user configuration
	I0210 10:24:30.658182  581830 start.go:297] selected driver: docker
	I0210 10:24:30.658190  581830 start.go:901] validating driver "docker" against <nil>
	I0210 10:24:30.658312  581830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 10:24:30.705945  581830 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-02-10 10:24:30.697777336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 10:24:30.706144  581830 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 10:24:30.706417  581830 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0210 10:24:30.706584  581830 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 10:24:30.709651  581830 out.go:169] Using Docker driver with root privileges
	I0210 10:24:30.712364  581830 cni.go:84] Creating CNI manager for ""
	I0210 10:24:30.712426  581830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 10:24:30.712438  581830 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 10:24:30.712525  581830 start.go:340] cluster config:
	{Name:download-only-357573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-357573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:24:30.715451  581830 out.go:97] Starting "download-only-357573" primary control-plane node in "download-only-357573" cluster
	I0210 10:24:30.715475  581830 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0210 10:24:30.718357  581830 out.go:97] Pulling base image v0.0.46 ...
	I0210 10:24:30.718384  581830 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 10:24:30.718484  581830 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0210 10:24:30.733820  581830 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0210 10:24:30.733938  581830 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0210 10:24:30.733961  581830 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0210 10:24:30.733967  581830 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0210 10:24:30.733975  581830 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0210 10:24:30.777158  581830 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0210 10:24:30.777191  581830 cache.go:56] Caching tarball of preloaded images
	I0210 10:24:30.777957  581830 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 10:24:30.780983  581830 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0210 10:24:30.781005  581830 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	I0210 10:24:30.859798  581830 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:3dfa1a6dfbdb6fd11337c34d558e517e -> /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0210 10:24:34.222693  581830 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	I0210 10:24:34.222807  581830 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	I0210 10:24:35.096620  581830 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0210 10:24:35.097042  581830 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/download-only-357573/config.json ...
	I0210 10:24:35.097084  581830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/download-only-357573/config.json: {Name:mk90ec9e6ef4b7909c178c131f236b23b027a064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:24:35.097963  581830 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 10:24:35.098196  581830 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20385-576242/.minikube/cache/linux/arm64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-357573 host does not exist
	  To start a cluster, run: "minikube start -p download-only-357573"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-357573
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0210 10:24:37.067903  581629 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-895573 --alsologtostderr --binary-mirror http://127.0.0.1:39211 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-895573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-895573
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-624397
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-624397: exit status 85 (77.068707ms)

                                                
                                                
-- stdout --
	* Profile "addons-624397" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-624397"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-624397
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-624397: exit status 85 (80.957518ms)

                                                
                                                
-- stdout --
	* Profile "addons-624397" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-624397"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (216.06s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-624397 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-624397 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m36.054243422s)
--- PASS: TestAddons/Setup (216.06s)
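
After a setup run like the one above, the quickest way to confirm which of the requested addons actually came up is the profile's addon list plus a cluster-wide pod listing. This is a generic sketch for reference, not part of the recorded test output:

	out/minikube-linux-arm64 -p addons-624397 addons list
	kubectl --context addons-624397 get pods -A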

                                                
                                    
x
+
TestAddons/serial/Volcano (40.1s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 60.040776ms
addons_test.go:815: volcano-admission stabilized in 60.659611ms
addons_test.go:807: volcano-scheduler stabilized in 61.984813ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-cjr76" [1bae38f0-fbd3-4c63-96c5-a4b8a2276303] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003755213s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-5fnr4" [b096791f-bec3-4887-9ca2-e2b74bf013fb] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003611216s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-vcgv5" [2f7d4b08-3e79-4c19-81a1-250a819ffead] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002805805s
addons_test.go:842: (dbg) Run:  kubectl --context addons-624397 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-624397 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-624397 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [eec13ce0-3580-4bf3-b674-493963924fc3] Pending
helpers_test.go:344: "test-job-nginx-0" [eec13ce0-3580-4bf3-b674-493963924fc3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [eec13ce0-3580-4bf3-b674-493963924fc3] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003395091s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-624397 addons disable volcano --alsologtostderr -v=1: (11.367217501s)
--- PASS: TestAddons/serial/Volcano (40.10s)
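
The health signals this test waits on can be replayed by hand with the same namespaces and label selectors that appear in the log above; the commands themselves are a sketch, not captured output:

	kubectl --context addons-624397 get pods -n volcano-system
	kubectl --context addons-624397 get vcjob -n my-volcano
	kubectl --context addons-624397 get pods -n my-volcano -l volcano.sh/job-name=test-job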

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-624397 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-624397 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.89s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-624397 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-624397 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5b72846a-0b59-4432-9213-d1b4b559787d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5b72846a-0b59-4432-9213-d1b4b559787d] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003236925s
addons_test.go:633: (dbg) Run:  kubectl --context addons-624397 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-624397 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-624397 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-624397 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.89s)
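
What the assertions above amount to is that the gcp-auth webhook injected fake credentials into the busybox pod. A manual spot-check using the same pod, variable names, and mount path shown in the log (a sketch, not recorded output):

	kubectl --context addons-624397 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
	kubectl --context addons-624397 exec busybox -- cat /google-app-creds.json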

                                                
                                    
x
+
TestAddons/parallel/Registry (15.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.050964ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-wlhkw" [94905553-6e8f-4b19-90a7-e95f330ce950] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002894618s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7wlh5" [0a30486a-eb5d-43f3-a3e1-88a3d452ca8a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003642817s
addons_test.go:331: (dbg) Run:  kubectl --context addons-624397 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-624397 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-624397 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.446879619s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 ip
2025/02/10 10:29:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.44s)
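
To spot-check the registry addon manually, the in-cluster probe and host-side probe below mirror what the test does (the node IP and port 5000 appear in the log; curl is an illustrative substitute for the test's HTTP GET):

    # Probe the registry service from inside the cluster, as the test does
    kubectl --context addons-624397 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Then from the host via the node IP on port 5000
    NODE_IP="$(out/minikube-linux-arm64 -p addons-624397 ip)"
    curl -sS "http://${NODE_IP}:5000/"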

                                                
                                    
TestAddons/parallel/Ingress (19.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-624397 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-624397 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-624397 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0d7fb533-3446-4062-b16a-c7b85b3e4c08] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0d7fb533-3446-4062-b16a-c7b85b3e4c08] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.007493053s
I0210 10:30:19.888273  581629 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-624397 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-624397 addons disable ingress-dns --alsologtostderr -v=1: (1.858600897s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-624397 addons disable ingress --alsologtostderr -v=1: (8.001121816s)
--- PASS: TestAddons/parallel/Ingress (19.83s)
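
The ingress and ingress-dns checks above condense to the following commands, all taken from the log; the curl runs inside the node over minikube ssh, and nslookup queries the ingress-dns resolver on the node IP:

    kubectl --context addons-624397 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-624397 replace --force -f testdata/nginx-pod-svc.yaml
    # Request the nginx backend through the ingress controller on the node
    out/minikube-linux-arm64 -p addons-624397 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: resolve the example host against the node IP
    kubectl --context addons-624397 replace --force -f testdata/ingress-dns-example-v1.yaml
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-624397 ip)"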

                                                
                                    
TestAddons/parallel/InspektorGadget (11.8s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rwphx" [c6843cfa-0131-4b5b-9470-dedf03e56885] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003614082s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-624397 addons disable inspektor-gadget --alsologtostderr -v=1: (5.796636591s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.84s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.690127ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-cfcxf" [b6734fec-3e2b-49f0-b734-b0fef461e7c2] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004076154s
addons_test.go:402: (dbg) Run:  kubectl --context addons-624397 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)
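
Once the metrics-server pod is healthy, pod metrics become queryable with kubectl top; the two commands below are the ones the test runs:

    kubectl --context addons-624397 top pods -n kube-system
    out/minikube-linux-arm64 -p addons-624397 addons disable metrics-server --alsologtostderr -v=1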

                                                
                                    
TestAddons/parallel/CSI (52.14s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0210 10:29:37.584168  581629 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0210 10:29:37.589308  581629 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0210 10:29:37.589338  581629 kapi.go:107] duration metric: took 8.198433ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.209115ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [700ac658-42b0-450e-afd9-8522afbe4faf] Pending
helpers_test.go:344: "task-pv-pod" [700ac658-42b0-450e-afd9-8522afbe4faf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [700ac658-42b0-450e-afd9-8522afbe4faf] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003980514s
addons_test.go:511: (dbg) Run:  kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-624397 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-624397 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-624397 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-624397 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d595dd5f-6149-4f16-bb86-46822f289649] Pending
helpers_test.go:344: "task-pv-pod-restore" [d595dd5f-6149-4f16-bb86-46822f289649] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d595dd5f-6149-4f16-bb86-46822f289649] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003587973s
addons_test.go:553: (dbg) Run:  kubectl --context addons-624397 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-624397 delete pod task-pv-pod-restore: (1.329291812s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-624397 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-624397 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-624397 addons disable volumesnapshots --alsologtostderr -v=1: (1.276597107s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-624397 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.04937413s)
--- PASS: TestAddons/parallel/CSI (52.14s)
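
The csi-hostpath-driver flow above, condensed into its essential steps (PVC, pod, snapshot, restore), using the same manifests under testdata/csi-hostpath-driver/:

    kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-624397 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
    # Delete the original consumers, then restore from the snapshot
    kubectl --context addons-624397 delete pod task-pv-pod
    kubectl --context addons-624397 delete pvc hpvc
    kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-624397 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml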

                                                
                                    
TestAddons/parallel/Headlamp (16.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-624397 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-624397 --alsologtostderr -v=1: (1.304231894s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-nmkqg" [ef900922-17db-409c-a152-1ecfb619f6cc] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-nmkqg" [ef900922-17db-409c-a152-1ecfb619f6cc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-nmkqg" [ef900922-17db-409c-a152-1ecfb619f6cc] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004117179s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-624397 addons disable headlamp --alsologtostderr -v=1: (5.880734191s)
--- PASS: TestAddons/parallel/Headlamp (16.19s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.96s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-wwqdb" [a36bb28a-b094-4e00-9537-7568eb6fa514] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011998518s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.96s)

                                                
                                    
TestAddons/parallel/LocalPath (8.73s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-624397 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-624397 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-624397 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [78d4e47f-44d1-485b-a5b3-cd1fe916b197] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [78d4e47f-44d1-485b-a5b3-cd1fe916b197] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [78d4e47f-44d1-485b-a5b3-cd1fe916b197] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003417704s
addons_test.go:906: (dbg) Run:  kubectl --context addons-624397 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 ssh "cat /opt/local-path-provisioner/pvc-7acbc91e-4f4e-44e3-b215-376fc48c622a_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-624397 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-624397 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.73s)
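
The local-path check reduces to binding a PVC, letting the pod write file1, and reading it back from the provisioner's directory on the node; <pvc-dir> below is a placeholder, since the pvc-..._default_test-pvc directory name changes every run:

    kubectl --context addons-624397 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-624397 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-624397 get pvc test-pvc -o jsonpath='{.status.phase}'
    out/minikube-linux-arm64 -p addons-624397 ssh "cat /opt/local-path-provisioner/<pvc-dir>/file1"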

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.64s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rrprb" [49e6bb7f-704e-41fe-b748-e1058353b90a] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003372026s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.64s)

                                                
                                    
TestAddons/parallel/Yakd (11.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-lt2pj" [1c470361-01ee-4f17-a6df-6bf90670c726] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003206988s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-624397 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-624397 addons disable yakd --alsologtostderr -v=1: (5.864125402s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-624397
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-624397: (11.959046023s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-624397
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-624397
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-624397
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

                                                
                                    
TestCertOptions (32.89s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-679762 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-679762 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (30.149583425s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-679762 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-679762 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-679762 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-679762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-679762
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-679762: (2.05218215s)
--- PASS: TestCertOptions (32.89s)
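
To see the effect of the --apiserver-ips/--apiserver-names/--apiserver-port flags, the certificate and kubeconfig can be inspected as below; the openssl command is the one from the log, while the grep and jsonpath filters are added here purely for illustration:

    out/minikube-linux-arm64 -p cert-options-679762 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-679762 config view --minify -o jsonpath='{.clusters[0].cluster.server}'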

                                                
                                    
TestCertExpiration (227.8s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-369393 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-369393 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.331912045s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-369393 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-369393 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.127776117s)
helpers_test.go:175: Cleaning up "cert-expiration-369393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-369393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-369393: (2.342208874s)
--- PASS: TestCertExpiration (227.80s)
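
The test starts the profile twice: once with certificates valid for only three minutes, then again, after they have lapsed, with a one-year expiry so minikube has to regenerate them (the wait of roughly three minutes in between is implied by the overall duration):

    out/minikube-linux-arm64 start -p cert-expiration-369393 --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    # ...wait for the short-lived certificates to expire...
    out/minikube-linux-arm64 start -p cert-expiration-369393 --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd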

                                                
                                    
TestForceSystemdFlag (39.57s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-929231 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0210 11:06:16.946470  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-929231 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.376089267s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-929231 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-929231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-929231
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-929231: (2.686305774s)
--- PASS: TestForceSystemdFlag (39.57s)
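
With --force-systemd the generated containerd config should select the systemd cgroup driver; the test cats the whole file, and the grep for SystemdCgroup below is an added filter, on the assumption that this runc option is what the check cares about:

    out/minikube-linux-arm64 start -p force-systemd-flag-929231 --memory=2048 --force-systemd \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p force-systemd-flag-929231 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup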

                                                
                                    
TestForceSystemdEnv (45.21s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-962978 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-962978 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.213136883s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-962978 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-962978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-962978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-962978: (2.597900689s)
--- PASS: TestForceSystemdEnv (45.21s)

                                                
                                    
TestDockerEnvContainerd (48.17s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-335203 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-335203 --driver=docker  --container-runtime=containerd: (32.585090119s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-335203"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KjPfKFFgPdH9/agent.602847" SSH_AGENT_PID="602848" DOCKER_HOST=ssh://docker@127.0.0.1:33508 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KjPfKFFgPdH9/agent.602847" SSH_AGENT_PID="602848" DOCKER_HOST=ssh://docker@127.0.0.1:33508 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KjPfKFFgPdH9/agent.602847" SSH_AGENT_PID="602848" DOCKER_HOST=ssh://docker@127.0.0.1:33508 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.119393947s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-KjPfKFFgPdH9/agent.602847" SSH_AGENT_PID="602848" DOCKER_HOST=ssh://docker@127.0.0.1:33508 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-335203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-335203
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-335203: (1.99406398s)
--- PASS: TestDockerEnvContainerd (48.17s)
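
The docker-env --ssh-host flow points a host docker client at the daemon inside the minikube container over SSH; a hand-run equivalent of the steps above, where eval'ing the generated environment replaces the explicit SSH_AUTH_SOCK/DOCKER_HOST variables shown in the log:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-335203)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls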

                                                
                                    
TestErrorSpam/setup (31.27s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-633368 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-633368 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-633368 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-633368 --driver=docker  --container-runtime=containerd: (31.272145111s)
--- PASS: TestErrorSpam/setup (31.27s)

                                                
                                    
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
TestErrorSpam/status (1.14s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
TestErrorSpam/pause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 pause
--- PASS: TestErrorSpam/pause (1.75s)

                                                
                                    
TestErrorSpam/unpause (1.89s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

                                                
                                    
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 stop: (1.263036189s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-633368 --log_dir /tmp/nospam-633368 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/test/nested/copy/581629/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (50.1s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-arm64 start -p functional-388309 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2251: (dbg) Done: out/minikube-linux-arm64 start -p functional-388309 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (50.09024787s)
--- PASS: TestFunctional/serial/StartWithProxy (50.10s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.64s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0210 10:33:11.151510  581629 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-arm64 start -p functional-388309 --alsologtostderr -v=8
E0210 10:33:13.872111  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:13.878478  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:13.889809  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:13.911190  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:13.952922  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:14.034240  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:14.195838  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:14.518088  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:15.159680  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:16.441012  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-arm64 start -p functional-388309 --alsologtostderr -v=8: (6.634030264s)
functional_test.go:680: soft start took 6.63811536s for "functional-388309" cluster.
I0210 10:33:17.785867  581629 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (6.64s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-388309 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 cache add registry.k8s.io/pause:3.1
E0210 10:33:19.005613  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 cache add registry.k8s.io/pause:3.1: (1.549922354s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 cache add registry.k8s.io/pause:3.3: (1.376800356s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 cache add registry.k8s.io/pause:latest: (1.254800297s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-388309 /tmp/TestFunctionalserialCacheCmdcacheadd_local3068030326/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 cache add minikube-local-cache-test:functional-388309
functional_test.go:1111: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 cache delete minikube-local-cache-test:functional-388309
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-388309
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh sudo crictl rmi registry.k8s.io/pause:latest
E0210 10:33:24.127094  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1170: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.719137ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 cache reload: (1.142556155s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
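
cache reload pushes the locally cached images back into the node after they have been removed, which is exactly the sequence above:

    out/minikube-linux-arm64 -p functional-388309 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-388309 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    out/minikube-linux-arm64 -p functional-388309 cache reload
    out/minikube-linux-arm64 -p functional-388309 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again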

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 kubectl -- --context functional-388309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-388309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.31s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-arm64 start -p functional-388309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0210 10:33:34.368582  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:33:54.850769  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-arm64 start -p functional-388309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.309544061s)
functional_test.go:778: restart took 46.309641259s for "functional-388309" cluster.
I0210 10:34:12.641611  581629 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (46.31s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-388309 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 logs: (1.730578604s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.81s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 logs --file /tmp/TestFunctionalserialLogsFileCmd210222954/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 logs --file /tmp/TestFunctionalserialLogsFileCmd210222954/001/logs.txt: (1.809181121s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.81s)

                                                
                                    
TestFunctional/serial/InvalidService (4.42s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-388309 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-388309
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-388309: exit status 115 (729.402441ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30388 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-388309 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)
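
A Service whose selector matches no running pod makes minikube service fail with SVC_UNREACHABLE (exit status 115), which is what the test provokes; by hand:

    kubectl --context functional-388309 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-388309; echo "exit=$?"   # expect 115
    kubectl --context functional-388309 delete -f testdata/invalidsvc.yaml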

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 config get cpus: exit status 14 (90.775992ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 config get cpus: exit status 14 (89.742407ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
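
config get exits with status 14 while a key is unset, which explains the two non-zero exits above; the round trip by hand:

    out/minikube-linux-arm64 -p functional-388309 config get cpus; echo "exit=$?"   # 14 while unset
    out/minikube-linux-arm64 -p functional-388309 config set cpus 2
    out/minikube-linux-arm64 -p functional-388309 config get cpus                   # prints 2
    out/minikube-linux-arm64 -p functional-388309 config unset cpus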

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-388309 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-388309 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 617792: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.24s)

                                                
                                    
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-388309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-388309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (210.186562ms)

                                                
                                                
-- stdout --
	* [functional-388309] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 10:34:54.852409  617513 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:34:54.852601  617513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:34:54.852629  617513 out.go:358] Setting ErrFile to fd 2...
	I0210 10:34:54.852647  617513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:34:54.852909  617513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 10:34:54.853375  617513 out.go:352] Setting JSON to false
	I0210 10:34:54.854505  617513 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11840,"bootTime":1739171855,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0210 10:34:54.854610  617513 start.go:139] virtualization:  
	I0210 10:34:54.858082  617513 out.go:177] * [functional-388309] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0210 10:34:54.861134  617513 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:34:54.861282  617513 notify.go:220] Checking for updates...
	I0210 10:34:54.866765  617513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:34:54.869495  617513 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	I0210 10:34:54.872448  617513 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	I0210 10:34:54.875318  617513 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0210 10:34:54.878205  617513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:34:54.881690  617513 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 10:34:54.882487  617513 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:34:54.914696  617513 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 10:34:54.914824  617513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 10:34:54.978244  617513 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-10 10:34:54.968795208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 10:34:54.978388  617513 docker.go:318] overlay module found
	I0210 10:34:54.981580  617513 out.go:177] * Using the docker driver based on existing profile
	I0210 10:34:54.984402  617513 start.go:297] selected driver: docker
	I0210 10:34:54.984420  617513 start.go:901] validating driver "docker" against &{Name:functional-388309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-388309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:34:54.984521  617513 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 10:34:54.987980  617513 out.go:201] 
	W0210 10:34:54.990977  617513 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0210 10:34:54.993817  617513 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-arm64 start -p functional-388309 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.49s)
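The two dry-run invocations above boil down to the following: the undersized memory request fails driver validation with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the running cluster, while the plain dry run succeeds.

minikube start -p functional-388309 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
echo $?    # 23: requested 250MiB is below the usable minimum of 1800MB
minikube start -p functional-388309 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
echo $?    # 0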

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 start -p functional-388309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-388309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (279.718697ms)

                                                
                                                
-- stdout --
	* [functional-388309] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 10:34:54.573932  617401 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:34:54.574114  617401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:34:54.574146  617401 out.go:358] Setting ErrFile to fd 2...
	I0210 10:34:54.574167  617401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:34:54.575077  617401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 10:34:54.575533  617401 out.go:352] Setting JSON to false
	I0210 10:34:54.576613  617401 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11839,"bootTime":1739171855,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0210 10:34:54.576723  617401 start.go:139] virtualization:  
	I0210 10:34:54.580644  617401 out.go:177] * [functional-388309] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0210 10:34:54.583618  617401 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:34:54.583717  617401 notify.go:220] Checking for updates...
	I0210 10:34:54.590509  617401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:34:54.593889  617401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	I0210 10:34:54.596904  617401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	I0210 10:34:54.602549  617401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0210 10:34:54.605700  617401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:34:54.609616  617401 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 10:34:54.610342  617401 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:34:54.657204  617401 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 10:34:54.657439  617401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 10:34:54.773808  617401 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-10 10:34:54.763618793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 10:34:54.773952  617401 docker.go:318] overlay module found
	I0210 10:34:54.777720  617401 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0210 10:34:54.780556  617401 start.go:297] selected driver: docker
	I0210 10:34:54.780579  617401 start.go:901] validating driver "docker" against &{Name:functional-388309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-388309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:34:54.780691  617401 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 10:34:54.784214  617401 out.go:201] 
	W0210 10:34:54.787159  617401 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0210 10:34:54.790003  617401 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
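The French output above comes from the same undersized dry run; this excerpt does not show how the harness switches the language, but run by hand the translation would normally be picked up from the standard locale variables (an assumption, not taken from the log).

LC_ALL=fr_FR.UTF-8 minikube start -p functional-388309 --dry-run --memory 250MB \
  --driver=docker --container-runtime=containerd
# -> "* Utilisation du pilote docker basé sur le profil existant" plus the localized
#    RSRC_INSUFFICIENT_REQ_MEMORY message, still exit status 23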

                                                
                                    
TestFunctional/parallel/StatusCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)
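The three status variants exercised above, spelled out as standalone commands; the Go-template fields come from minikube's status output.

minikube -p functional-388309 status           # human-readable summary
minikube -p functional-388309 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
minikube -p functional-388309 status -o json   # machine-readable, useful for scripting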

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1644: (dbg) Run:  kubectl --context functional-388309 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-388309 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-xsn59" [b29c72f2-87f5-4fde-bea8-452de9ca0614] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-xsn59" [b29c72f2-87f5-4fde-bea8-452de9ca0614] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004110915s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32705
functional_test.go:1692: http://192.168.49.2:32705: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-8449669db6-xsn59

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32705
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.75s)
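End to end, the flow above is: create a deployment, expose it as a NodePort service, let minikube resolve the node URL, and hit it. A sketch for standalone use; the explicit wait step is an addition here, not something the test itself runs.

kubectl --context functional-388309 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-388309 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-388309 wait --for=condition=available deployment/hello-node-connect --timeout=120s
URL=$(minikube -p functional-388309 service hello-node-connect --url)   # e.g. http://192.168.49.2:32705
curl -s "$URL"                                                          # echoserver replies with the request details shown above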

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2987571e-ba62-4b07-a109-1de54a4850f6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004084301s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-388309 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-388309 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-388309 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-388309 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c7292d35-111a-430d-9984-00a05bb902ec] Pending
helpers_test.go:344: "sp-pod" [c7292d35-111a-430d-9984-00a05bb902ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c7292d35-111a-430d-9984-00a05bb902ec] Running
E0210 10:34:35.812682  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004309455s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-388309 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-388309 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-388309 delete -f testdata/storage-provisioner/pod.yaml: (1.251439785s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-388309 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f96e73f2-b1f8-45c0-9728-cda7ab98b6bd] Pending
helpers_test.go:344: "sp-pod" [f96e73f2-b1f8-45c0-9728-cda7ab98b6bd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f96e73f2-b1f8-45c0-9728-cda7ab98b6bd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002790892s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-388309 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.29s)
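The testdata manifests themselves are not reproduced in the log; roughly, the test binds a claim named myclaim through the default storage class and mounts it in a pod twice to prove the data survives pod deletion. A hand-rolled equivalent of the claim (size and access mode are illustrative, not taken from the testdata file):

kubectl --context functional-388309 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-388309 get pvc myclaim -o jsonpath='{.status.phase}'   # should report Bound once the storage-provisioner acts on it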

                                                
                                    
TestFunctional/parallel/SSHCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh -n functional-388309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 cp functional-388309:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3882616995/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh -n functional-388309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh -n functional-388309 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.44s)
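The three copy directions exercised above, as standalone commands; the /tmp destination on the host is illustrative.

minikube -p functional-388309 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
minikube -p functional-388309 cp functional-388309:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
minikube -p functional-388309 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt          # parent directories are created on the node
minikube -p functional-388309 ssh -n functional-388309 "sudo cat /home/docker/cp-test.txt"     # verify the copy landed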

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/581629/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo cat /etc/test/nested/copy/581629/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
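What this test relies on: files placed under $MINIKUBE_HOME/files/ are mirrored into the node during provisioning, preserving the relative path (the 581629 path component above is the harness PID). A sketch of doing the same by hand, assuming the file is seeded before the cluster is (re)started.

mkdir -p ~/.minikube/files/etc/test/nested/copy/581629
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/581629/hosts
minikube start -p functional-388309                                          # file sync happens during start
minikube -p functional-388309 ssh "cat /etc/test/nested/copy/581629/hosts"   # the seeded content appears in the node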

                                                
                                    
TestFunctional/parallel/CertSync (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/581629.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo cat /etc/ssl/certs/581629.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/581629.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo cat /usr/share/ca-certificates/581629.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/5816292.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo cat /etc/ssl/certs/5816292.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/5816292.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo cat /usr/share/ca-certificates/5816292.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-388309 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo systemctl is-active docker"
2025/02/10 10:35:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 ssh "sudo systemctl is-active docker": exit status 1 (304.242705ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 ssh "sudo systemctl is-active crio": exit status 1 (309.276507ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
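With containerd as the selected runtime, the other runtime units are expected to be inactive; "systemctl is-active" exits 3 for an inactive unit, which is the ssh exit status surfaced above. By hand:

minikube -p functional-388309 ssh "sudo systemctl is-active containerd"   # active, exit 0
minikube -p functional-388309 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit
minikube -p functional-388309 ssh "sudo systemctl is-active crio"         # inactive, non-zero exit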

                                                
                                    
TestFunctional/parallel/License (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-388309 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-388309 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-388309 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 615089: os: process already finished
helpers_test.go:502: unable to terminate pid 614894: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-388309 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-388309 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-388309 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c47b3db2-2937-4088-8769-1db744697172] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c47b3db2-2937-4088-8769-1db744697172] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003341071s
I0210 10:34:31.164972  581629 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.50s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-388309 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.12.100 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-388309 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
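Taken together, the tunnel sub-tests above amount to: keep "minikube tunnel" running, wait for a LoadBalancer service to receive an ingress IP, reach that IP directly, then stop the tunnel. As standalone commands (the manifest path and the 10.111.12.100 address are the ones from the log):

minikube -p functional-388309 tunnel &                               # needs privileges to add the route; keeps running
kubectl --context functional-388309 apply -f testdata/testsvc.yaml
kubectl --context functional-388309 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.111.12.100/                                        # reachable while the tunnel is up
kill %1                                                              # stopping the tunnel tears the route down again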

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1454: (dbg) Run:  kubectl --context functional-388309 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-388309 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-pm6d9" [d0b33d3a-c67b-4584-8ebd-19a0de57ffa8] Pending
helpers_test.go:344: "hello-node-64fc58db8c-pm6d9" [d0b33d3a-c67b-4584-8ebd-19a0de57ffa8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-pm6d9" [d0b33d3a-c67b-4584-8ebd-19a0de57ffa8] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003720245s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1332: Took "415.474081ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1346: Took "88.870644ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 service list -o json
functional_test.go:1511: Took "614.900464ms" to run "out/minikube-linux-arm64 -p functional-388309 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1383: Took "436.334393ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1396: Took "75.075941ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31067
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdany-port4115829335/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739183692021819480" to /tmp/TestFunctionalparallelMountCmdany-port4115829335/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739183692021819480" to /tmp/TestFunctionalparallelMountCmdany-port4115829335/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739183692021819480" to /tmp/TestFunctionalparallelMountCmdany-port4115829335/001/test-1739183692021819480
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (486.973328ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 10:34:52.509738  581629 retry.go:31] will retry after 461.830995ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 10 10:34 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 10 10:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 10 10:34 test-1739183692021819480
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh cat /mount-9p/test-1739183692021819480
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-388309 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c497f347-4ad1-4d5f-aa52-0ec88aad03b1] Pending
helpers_test.go:344: "busybox-mount" [c497f347-4ad1-4d5f-aa52-0ec88aad03b1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c497f347-4ad1-4d5f-aa52-0ec88aad03b1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c497f347-4ad1-4d5f-aa52-0ec88aad03b1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003840164s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-388309 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdany-port4115829335/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.82s)
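The 9p mount round-trip above, reduced to its commands; the host directory here is illustrative, and the initial findmnt retry in the log is just the test waiting for the mount to come up.

minikube mount -p functional-388309 /tmp/host-dir:/mount-9p &        # serve the host directory into the node over 9p
minikube -p functional-388309 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the mount is live
minikube -p functional-388309 ssh "ls -la /mount-9p"                 # host files visible from the node
minikube -p functional-388309 ssh "sudo umount -f /mount-9p"         # cleanup inside the node
kill %1                                                              # stop the mount process on the host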

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31067
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdspecific-port1546519485/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (487.177499ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 10:35:01.327559  581629 retry.go:31] will retry after 553.92116ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdspecific-port1546519485/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 ssh "sudo umount -f /mount-9p": exit status 1 (390.718499ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-388309 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdspecific-port1546519485/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3390697746/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3390697746/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3390697746/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T" /mount1: exit status 1 (1.203610616s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 10:35:04.425666  581629 retry.go:31] will retry after 650.064625ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-388309 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3390697746/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3390697746/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-388309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3390697746/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.75s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.41s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 version -o=json --components: (1.4049724s)
--- PASS: TestFunctional/parallel/Version/components (1.41s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-388309 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-388309
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-388309
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-388309 image ls --format short --alsologtostderr:
I0210 10:35:13.682804  620428 out.go:345] Setting OutFile to fd 1 ...
I0210 10:35:13.683075  620428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:13.683096  620428 out.go:358] Setting ErrFile to fd 2...
I0210 10:35:13.683117  620428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:13.683409  620428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
I0210 10:35:13.684206  620428 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:13.684349  620428 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:13.684922  620428 cli_runner.go:164] Run: docker container inspect functional-388309 --format={{.State.Status}}
I0210 10:35:13.704326  620428 ssh_runner.go:195] Run: systemctl --version
I0210 10:35:13.704376  620428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388309
I0210 10:35:13.729988  620428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/functional-388309/id_rsa Username:docker}
I0210 10:35:13.826708  620428 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-388309 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:7fc9d4 | 67.9MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:265c2d | 26.2MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e124fb | 27.4MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | alpine             | sha256:525fa8 | 21.7MB |
| docker.io/library/nginx                     | latest             | sha256:9b1b7b | 68.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-388309  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:293376 | 24MB   |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:ddb38c | 18.9MB |
| docker.io/kindest/kindnetd                  | v20241212-9f82dd49 | sha256:e1181e | 35.7MB |
| docker.io/library/minikube-local-cache-test | functional-388309  | sha256:407fa9 | 992B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-388309 image ls --format table --alsologtostderr:
I0210 10:35:13.970843  620494 out.go:345] Setting OutFile to fd 1 ...
I0210 10:35:13.970954  620494 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:13.971016  620494 out.go:358] Setting ErrFile to fd 2...
I0210 10:35:13.971026  620494 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:13.971287  620494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
I0210 10:35:13.971945  620494 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:13.972067  620494 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:13.972557  620494 cli_runner.go:164] Run: docker container inspect functional-388309 --format={{.State.Status}}
I0210 10:35:13.992873  620494 ssh_runner.go:195] Run: systemctl --version
I0210 10:35:13.992927  620494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388309
I0210 10:35:14.013758  620494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/functional-388309/id_rsa Username:docker}
I0210 10:35:14.114225  620494 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-388309 image ls --format json --alsologtostderr:
[{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-388309"],"size":"2173567"},{"id":"sha256:e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"35679862"},{"id":"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"67941650"},{"id":"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b
274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"23968433"},{"id":"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"18922457"},{"id":"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"26217748"},{"id":"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"27363416"},{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38
fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:525fa81b865c3bef8743265945df1859f8f0cb06a4f71aacbcb54f2fbd5a57d8","repoDigests":["docker.io/library/nginx@sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21680278"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d2
1afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:407fa96c387319a1c6667d52db084388df2e2ed5c60ff7eaa5baaab075ee7a77","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-388309"],"size":"992"},{"id":"sha25
6:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:9b1b7be1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58","repoDigests":["docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34"],"repoTags":["docker.io/library/nginx:latest"],"size":"68631146"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-388309 image ls --format json --alsologtostderr:
I0210 10:35:13.967856  620490 out.go:345] Setting OutFile to fd 1 ...
I0210 10:35:13.968067  620490 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:13.968093  620490 out.go:358] Setting ErrFile to fd 2...
I0210 10:35:13.968112  620490 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:13.968389  620490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
I0210 10:35:13.969126  620490 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:13.969307  620490 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:13.969860  620490 cli_runner.go:164] Run: docker container inspect functional-388309 --format={{.State.Status}}
I0210 10:35:13.991527  620490 ssh_runner.go:195] Run: systemctl --version
I0210 10:35:13.991582  620490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388309
I0210 10:35:14.014240  620490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/functional-388309/id_rsa Username:docker}
I0210 10:35:14.102012  620490 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
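Because the JSON listing above is a flat array of {id, repoDigests, repoTags, size} objects, it is easy to post-process; for example with jq (not used by the test itself, just an assumed helper for illustration) one can print every tag and fall back to the image ID for untagged entries:

	out/minikube-linux-arm64 -p functional-388309 image ls --format json | jq -r '.[] | .repoTags[]? // .id'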

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-arm64 -p functional-388309 image ls --format yaml --alsologtostderr:
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "26217748"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:9b1b7be1ffa607d40d545607d3fdf441f08553468adec5588fb58499ad77fe58
repoDigests:
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
repoTags:
- docker.io/library/nginx:latest
size: "68631146"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "18922457"
- id: sha256:e1181ee320546c66f17956a302db1b7899d88a593f116726718851133de588b6
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "35679862"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "67941650"
- id: sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "27363416"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-388309
size: "2173567"
- id: sha256:407fa96c387319a1c6667d52db084388df2e2ed5c60ff7eaa5baaab075ee7a77
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-388309
size: "992"
- id: sha256:525fa81b865c3bef8743265945df1859f8f0cb06a4f71aacbcb54f2fbd5a57d8
repoDigests:
- docker.io/library/nginx@sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef
repoTags:
- docker.io/library/nginx:alpine
size: "21680278"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "23968433"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-arm64 -p functional-388309 image ls --format yaml --alsologtostderr:
I0210 10:35:13.682973  620429 out.go:345] Setting OutFile to fd 1 ...
I0210 10:35:13.683088  620429 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:13.683189  620429 out.go:358] Setting ErrFile to fd 2...
I0210 10:35:13.683198  620429 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:13.683492  620429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
I0210 10:35:13.684152  620429 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:13.684389  620429 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:13.684919  620429 cli_runner.go:164] Run: docker container inspect functional-388309 --format={{.State.Status}}
I0210 10:35:13.704536  620429 ssh_runner.go:195] Run: systemctl --version
I0210 10:35:13.704596  620429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388309
I0210 10:35:13.728414  620429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/functional-388309/id_rsa Username:docker}
I0210 10:35:13.827321  620429 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-388309 ssh pgrep buildkitd: exit status 1 (275.279466ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image build -t localhost/my-image:functional-388309 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 image build -t localhost/my-image:functional-388309 testdata/build --alsologtostderr: (3.399189247s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-arm64 -p functional-388309 image build -t localhost/my-image:functional-388309 testdata/build --alsologtostderr:
I0210 10:35:14.480908  620614 out.go:345] Setting OutFile to fd 1 ...
I0210 10:35:14.481673  620614 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:14.481692  620614 out.go:358] Setting ErrFile to fd 2...
I0210 10:35:14.481699  620614 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:35:14.481957  620614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
I0210 10:35:14.482650  620614 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:14.485153  620614 config.go:182] Loaded profile config "functional-388309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 10:35:14.485681  620614 cli_runner.go:164] Run: docker container inspect functional-388309 --format={{.State.Status}}
I0210 10:35:14.505065  620614 ssh_runner.go:195] Run: systemctl --version
I0210 10:35:14.505125  620614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388309
I0210 10:35:14.529230  620614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/functional-388309/id_rsa Username:docker}
I0210 10:35:14.617987  620614 build_images.go:161] Building image from path: /tmp/build.2031538832.tar
I0210 10:35:14.618056  620614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0210 10:35:14.626913  620614 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2031538832.tar
I0210 10:35:14.630552  620614 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2031538832.tar: stat -c "%s %y" /var/lib/minikube/build/build.2031538832.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2031538832.tar': No such file or directory
I0210 10:35:14.630592  620614 ssh_runner.go:362] scp /tmp/build.2031538832.tar --> /var/lib/minikube/build/build.2031538832.tar (3072 bytes)
I0210 10:35:14.662983  620614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2031538832
I0210 10:35:14.672078  620614 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2031538832 -xf /var/lib/minikube/build/build.2031538832.tar
I0210 10:35:14.681643  620614 containerd.go:394] Building image: /var/lib/minikube/build/build.2031538832
I0210 10:35:14.681735  620614 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2031538832 --local dockerfile=/var/lib/minikube/build/build.2031538832 --output type=image,name=localhost/my-image:functional-388309
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.6s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:16fb2a974ac9f0987e05420f2834919dc7776dc5b45943e6d8a5bbcd072a5a9a
#8 exporting manifest sha256:16fb2a974ac9f0987e05420f2834919dc7776dc5b45943e6d8a5bbcd072a5a9a 0.0s done
#8 exporting config sha256:f48dfa06684c75c9c03bc2d70e2be1c54fb6a6b9aadb9332db6a86a162d1fa48 0.0s done
#8 naming to localhost/my-image:functional-388309 done
#8 DONE 0.2s
I0210 10:35:17.801323  620614 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2031538832 --local dockerfile=/var/lib/minikube/build/build.2031538832 --output type=image,name=localhost/my-image:functional-388309: (3.119554976s)
I0210 10:35:17.801403  620614 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2031538832
I0210 10:35:17.811405  620614 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2031538832.tar
I0210 10:35:17.821503  620614 build_images.go:217] Built localhost/my-image:functional-388309 from /tmp/build.2031538832.tar
I0210 10:35:17.821568  620614 build_images.go:133] succeeded building to: functional-388309
I0210 10:35:17.821574  620614 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.91s)
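The buildkit steps above imply a build context of roughly the following shape (a 97-byte Dockerfile with FROM gcr.io/k8s-minikube/busybox, RUN true and ADD content.txt, plus a small content.txt); this is an inferred sketch, not the literal contents of testdata/build:

	mkdir -p build-ctx
	printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > build-ctx/Dockerfile
	echo hello > build-ctx/content.txt     # the real fixture's content.txt is not shown in the log
	out/minikube-linux-arm64 -p functional-388309 image build -t localhost/my-image:functional-388309 build-ctx --alsologtostderr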

TestFunctional/parallel/ImageCommands/Setup (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-388309
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image load --daemon kicbase/echo-server:functional-388309 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 image load --daemon kicbase/echo-server:functional-388309 --alsologtostderr: (1.215734335s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.53s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image load --daemon kicbase/echo-server:functional-388309 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-linux-arm64 -p functional-388309 image load --daemon kicbase/echo-server:functional-388309 --alsologtostderr: (1.080862301s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-388309
functional_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image load --daemon kicbase/echo-server:functional-388309 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image save kicbase/echo-server:functional-388309 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image rm kicbase/echo-server:functional-388309 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)
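Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile above amount to a full tarball round trip; condensed from the commands in the logs:

	out/minikube-linux-arm64 -p functional-388309 image save kicbase/echo-server:functional-388309 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
	out/minikube-linux-arm64 -p functional-388309 image rm kicbase/echo-server:functional-388309 --alsologtostderr
	out/minikube-linux-arm64 -p functional-388309 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
	out/minikube-linux-arm64 -p functional-388309 image ls   # the tag should be listed again after the reload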

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-388309
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-388309 image save --daemon kicbase/echo-server:functional-388309 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-388309
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-388309
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-388309
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-388309
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (114.39s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-558727 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0210 10:35:57.736697  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-558727 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m53.525133301s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (114.39s)

TestMultiControlPlane/serial/DeployApp (31.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-558727 -- rollout status deployment/busybox: (28.679613399s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-b78kb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-hdx8r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-m952z -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-b78kb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-hdx8r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-m952z -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-b78kb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-hdx8r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-m952z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.83s)

TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-b78kb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-b78kb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-hdx8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-hdx8r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-m952z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-558727 -- exec busybox-58667487b6-m952z -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
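The host lookup used above leans on the layout of busybox's nslookup output: awk 'NR==5' keeps the fifth line, cut -d' ' -f3 takes its third space-separated field (here 192.168.49.1, the address host.minikube.internal resolves to inside the cluster), and that address is then pinged from each pod. Reproduced manually against one pod name copied from the log:

	kubectl --context ha-558727 exec busybox-58667487b6-b78kb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context ha-558727 exec busybox-58667487b6-b78kb -- sh -c "ping -c 1 192.168.49.1"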

TestMultiControlPlane/serial/AddWorkerNode (24.73s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-558727 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-558727 -v=7 --alsologtostderr: (23.768763058s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.73s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-558727 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0210 10:38:13.870009  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

TestMultiControlPlane/serial/CopyFile (19.42s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-558727 status --output json -v=7 --alsologtostderr: (1.044337013s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp testdata/cp-test.txt ha-558727:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2012868117/001/cp-test_ha-558727.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727:/home/docker/cp-test.txt ha-558727-m02:/home/docker/cp-test_ha-558727_ha-558727-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m02 "sudo cat /home/docker/cp-test_ha-558727_ha-558727-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727:/home/docker/cp-test.txt ha-558727-m03:/home/docker/cp-test_ha-558727_ha-558727-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m03 "sudo cat /home/docker/cp-test_ha-558727_ha-558727-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727:/home/docker/cp-test.txt ha-558727-m04:/home/docker/cp-test_ha-558727_ha-558727-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m04 "sudo cat /home/docker/cp-test_ha-558727_ha-558727-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp testdata/cp-test.txt ha-558727-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2012868117/001/cp-test_ha-558727-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m02:/home/docker/cp-test.txt ha-558727:/home/docker/cp-test_ha-558727-m02_ha-558727.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "sudo cat /home/docker/cp-test_ha-558727-m02_ha-558727.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m02:/home/docker/cp-test.txt ha-558727-m03:/home/docker/cp-test_ha-558727-m02_ha-558727-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m03 "sudo cat /home/docker/cp-test_ha-558727-m02_ha-558727-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m02:/home/docker/cp-test.txt ha-558727-m04:/home/docker/cp-test_ha-558727-m02_ha-558727-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m04 "sudo cat /home/docker/cp-test_ha-558727-m02_ha-558727-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp testdata/cp-test.txt ha-558727-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2012868117/001/cp-test_ha-558727-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m03:/home/docker/cp-test.txt ha-558727:/home/docker/cp-test_ha-558727-m03_ha-558727.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "sudo cat /home/docker/cp-test_ha-558727-m03_ha-558727.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m03:/home/docker/cp-test.txt ha-558727-m02:/home/docker/cp-test_ha-558727-m03_ha-558727-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m02 "sudo cat /home/docker/cp-test_ha-558727-m03_ha-558727-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m03:/home/docker/cp-test.txt ha-558727-m04:/home/docker/cp-test_ha-558727-m03_ha-558727-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m04 "sudo cat /home/docker/cp-test_ha-558727-m03_ha-558727-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp testdata/cp-test.txt ha-558727-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2012868117/001/cp-test_ha-558727-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m04:/home/docker/cp-test.txt ha-558727:/home/docker/cp-test_ha-558727-m04_ha-558727.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "sudo cat /home/docker/cp-test_ha-558727-m04_ha-558727.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m04:/home/docker/cp-test.txt ha-558727-m02:/home/docker/cp-test_ha-558727-m04_ha-558727-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m02 "sudo cat /home/docker/cp-test_ha-558727-m04_ha-558727-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 cp ha-558727-m04:/home/docker/cp-test.txt ha-558727-m03:/home/docker/cp-test_ha-558727-m04_ha-558727-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727-m03 "sudo cat /home/docker/cp-test_ha-558727-m04_ha-558727-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.42s)

TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 node stop m02 -v=7 --alsologtostderr
E0210 10:38:41.582919  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-558727 node stop m02 -v=7 --alsologtostderr: (12.050115882s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr: exit status 7 (737.31598ms)

                                                
                                                
-- stdout --
	ha-558727
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558727-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-558727-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-558727-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 10:38:46.096021  637039 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:38:46.096508  637039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:38:46.096548  637039 out.go:358] Setting ErrFile to fd 2...
	I0210 10:38:46.096567  637039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:38:46.096871  637039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 10:38:46.097120  637039 out.go:352] Setting JSON to false
	I0210 10:38:46.097179  637039 mustload.go:65] Loading cluster: ha-558727
	I0210 10:38:46.100649  637039 config.go:182] Loaded profile config "ha-558727": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 10:38:46.100722  637039 status.go:174] checking status of ha-558727 ...
	I0210 10:38:46.101385  637039 cli_runner.go:164] Run: docker container inspect ha-558727 --format={{.State.Status}}
	I0210 10:38:46.101661  637039 notify.go:220] Checking for updates...
	I0210 10:38:46.120126  637039 status.go:371] ha-558727 host status = "Running" (err=<nil>)
	I0210 10:38:46.120150  637039 host.go:66] Checking if "ha-558727" exists ...
	I0210 10:38:46.120450  637039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-558727
	I0210 10:38:46.157060  637039 host.go:66] Checking if "ha-558727" exists ...
	I0210 10:38:46.157416  637039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:38:46.157491  637039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-558727
	I0210 10:38:46.187572  637039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/ha-558727/id_rsa Username:docker}
	I0210 10:38:46.282973  637039 ssh_runner.go:195] Run: systemctl --version
	I0210 10:38:46.287209  637039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:38:46.299992  637039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 10:38:46.362157  637039 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:73 SystemTime:2025-02-10 10:38:46.35145397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 10:38:46.362785  637039 kubeconfig.go:125] found "ha-558727" server: "https://192.168.49.254:8443"
	I0210 10:38:46.362845  637039 api_server.go:166] Checking apiserver status ...
	I0210 10:38:46.362910  637039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 10:38:46.376163  637039 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1484/cgroup
	I0210 10:38:46.386535  637039 api_server.go:182] apiserver freezer: "12:freezer:/docker/0fd2efb4992e9ef482aab0f725d5cf4d4525860843e305cb81c7a86b16034210/kubepods/burstable/pod514bd81eb19e89350ff3af2a5b164048/d66ca8a78907af19834a732a7ec354333d13c6203275bcd0b2f14939a4a84666"
	I0210 10:38:46.386612  637039 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0fd2efb4992e9ef482aab0f725d5cf4d4525860843e305cb81c7a86b16034210/kubepods/burstable/pod514bd81eb19e89350ff3af2a5b164048/d66ca8a78907af19834a732a7ec354333d13c6203275bcd0b2f14939a4a84666/freezer.state
	I0210 10:38:46.395734  637039 api_server.go:204] freezer state: "THAWED"
	I0210 10:38:46.395767  637039 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0210 10:38:46.404332  637039 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0210 10:38:46.404360  637039 status.go:463] ha-558727 apiserver status = Running (err=<nil>)
	I0210 10:38:46.404370  637039 status.go:176] ha-558727 status: &{Name:ha-558727 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:38:46.404385  637039 status.go:174] checking status of ha-558727-m02 ...
	I0210 10:38:46.404708  637039 cli_runner.go:164] Run: docker container inspect ha-558727-m02 --format={{.State.Status}}
	I0210 10:38:46.421671  637039 status.go:371] ha-558727-m02 host status = "Stopped" (err=<nil>)
	I0210 10:38:46.421695  637039 status.go:384] host is not running, skipping remaining checks
	I0210 10:38:46.421703  637039 status.go:176] ha-558727-m02 status: &{Name:ha-558727-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:38:46.421722  637039 status.go:174] checking status of ha-558727-m03 ...
	I0210 10:38:46.422024  637039 cli_runner.go:164] Run: docker container inspect ha-558727-m03 --format={{.State.Status}}
	I0210 10:38:46.440031  637039 status.go:371] ha-558727-m03 host status = "Running" (err=<nil>)
	I0210 10:38:46.440055  637039 host.go:66] Checking if "ha-558727-m03" exists ...
	I0210 10:38:46.440356  637039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-558727-m03
	I0210 10:38:46.457951  637039 host.go:66] Checking if "ha-558727-m03" exists ...
	I0210 10:38:46.458284  637039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:38:46.458332  637039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-558727-m03
	I0210 10:38:46.475584  637039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/ha-558727-m03/id_rsa Username:docker}
	I0210 10:38:46.563060  637039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:38:46.576505  637039 kubeconfig.go:125] found "ha-558727" server: "https://192.168.49.254:8443"
	I0210 10:38:46.576541  637039 api_server.go:166] Checking apiserver status ...
	I0210 10:38:46.576593  637039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 10:38:46.592911  637039 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	I0210 10:38:46.605590  637039 api_server.go:182] apiserver freezer: "12:freezer:/docker/27cf1978ac89a2b85625190ce67537803d72b4fe56092f56e82b7a4b4e0fccda/kubepods/burstable/podee6313f1021555bbad6ba15e041ee8b9/a192d0c4643699a6f7c915f79a1dd42d217716fc9f92510e57ebf679e9469416"
	I0210 10:38:46.605708  637039 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27cf1978ac89a2b85625190ce67537803d72b4fe56092f56e82b7a4b4e0fccda/kubepods/burstable/podee6313f1021555bbad6ba15e041ee8b9/a192d0c4643699a6f7c915f79a1dd42d217716fc9f92510e57ebf679e9469416/freezer.state
	I0210 10:38:46.616642  637039 api_server.go:204] freezer state: "THAWED"
	I0210 10:38:46.616671  637039 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0210 10:38:46.624882  637039 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0210 10:38:46.624912  637039 status.go:463] ha-558727-m03 apiserver status = Running (err=<nil>)
	I0210 10:38:46.624923  637039 status.go:176] ha-558727-m03 status: &{Name:ha-558727-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:38:46.624964  637039 status.go:174] checking status of ha-558727-m04 ...
	I0210 10:38:46.625288  637039 cli_runner.go:164] Run: docker container inspect ha-558727-m04 --format={{.State.Status}}
	I0210 10:38:46.642403  637039 status.go:371] ha-558727-m04 host status = "Running" (err=<nil>)
	I0210 10:38:46.642429  637039 host.go:66] Checking if "ha-558727-m04" exists ...
	I0210 10:38:46.642730  637039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-558727-m04
	I0210 10:38:46.659789  637039 host.go:66] Checking if "ha-558727-m04" exists ...
	I0210 10:38:46.660092  637039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:38:46.660138  637039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-558727-m04
	I0210 10:38:46.678338  637039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/ha-558727-m04/id_rsa Username:docker}
	I0210 10:38:46.766434  637039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:38:46.777815  637039 status.go:176] ha-558727-m04 status: &{Name:ha-558727-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
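Note: the status probe in the stderr block above only reports "apiserver: Running" after it has located the kube-apiserver process, confirmed its freezer cgroup is THAWED, and received a 200 from /healthz on the control-plane VIP. A rough manual reproduction of those checks, assuming the docker driver, the 192.168.49.254:8443 endpoint used in this run, and curl being available inside the node:

  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "sudo pgrep -xnf kube-apiserver.*minikube.*"
  out/minikube-linux-arm64 -p ha-558727 ssh -n ha-558727 "curl -sk https://192.168.49.254:8443/healthz"   # expect: ok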

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (19.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-558727 node start m02 -v=7 --alsologtostderr: (18.358044528s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr: (1.303318946s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.79s)
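Note: restarting a single control-plane member is just "node start" followed by the usual status and readiness checks. A minimal sketch with this run's profile:

  out/minikube-linux-arm64 -p ha-558727 node start m02
  out/minikube-linux-arm64 -p ha-558727 status
  kubectl get nodes   # the restarted member should return to Ready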

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.27246506s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.27s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (137.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-558727 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-558727 -v=7 --alsologtostderr
E0210 10:39:22.669685  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:22.676033  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:22.687479  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:22.708921  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:22.750366  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:22.831845  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:22.993404  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:23.315414  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:23.956949  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:25.238438  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:27.801382  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:32.923620  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:39:43.165602  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-558727 -v=7 --alsologtostderr: (37.068004342s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-558727 --wait=true -v=7 --alsologtostderr
E0210 10:40:03.647294  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:40:44.609276  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-558727 --wait=true -v=7 --alsologtostderr: (1m40.174207484s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-558727
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (137.45s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-558727 node delete m03 -v=7 --alsologtostderr: (9.618551221s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.60s)
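Note: the quoted go-template above only prints each node's Ready condition status. An equivalent and arguably more readable check using jsonpath, assuming kubectl is already pointed at the ha-558727 context:

  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'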

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 stop -v=7 --alsologtostderr
E0210 10:42:06.532327  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-558727 stop -v=7 --alsologtostderr: (35.824570444s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr: exit status 7 (119.217532ms)

                                                
                                                
-- stdout --
	ha-558727
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-558727-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-558727-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 10:42:13.299937  651659 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:42:13.300183  651659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:42:13.300216  651659 out.go:358] Setting ErrFile to fd 2...
	I0210 10:42:13.300238  651659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:42:13.300508  651659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 10:42:13.300762  651659 out.go:352] Setting JSON to false
	I0210 10:42:13.300831  651659 mustload.go:65] Loading cluster: ha-558727
	I0210 10:42:13.300926  651659 notify.go:220] Checking for updates...
	I0210 10:42:13.301315  651659 config.go:182] Loaded profile config "ha-558727": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 10:42:13.301357  651659 status.go:174] checking status of ha-558727 ...
	I0210 10:42:13.302223  651659 cli_runner.go:164] Run: docker container inspect ha-558727 --format={{.State.Status}}
	I0210 10:42:13.321122  651659 status.go:371] ha-558727 host status = "Stopped" (err=<nil>)
	I0210 10:42:13.321144  651659 status.go:384] host is not running, skipping remaining checks
	I0210 10:42:13.321163  651659 status.go:176] ha-558727 status: &{Name:ha-558727 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:42:13.321194  651659 status.go:174] checking status of ha-558727-m02 ...
	I0210 10:42:13.321501  651659 cli_runner.go:164] Run: docker container inspect ha-558727-m02 --format={{.State.Status}}
	I0210 10:42:13.344533  651659 status.go:371] ha-558727-m02 host status = "Stopped" (err=<nil>)
	I0210 10:42:13.344553  651659 status.go:384] host is not running, skipping remaining checks
	I0210 10:42:13.344559  651659 status.go:176] ha-558727-m02 status: &{Name:ha-558727-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:42:13.344579  651659 status.go:174] checking status of ha-558727-m04 ...
	I0210 10:42:13.344890  651659 cli_runner.go:164] Run: docker container inspect ha-558727-m04 --format={{.State.Status}}
	I0210 10:42:13.366082  651659 status.go:371] ha-558727-m04 host status = "Stopped" (err=<nil>)
	I0210 10:42:13.366106  651659 status.go:384] host is not running, skipping remaining checks
	I0210 10:42:13.366113  651659 status.go:176] ha-558727-m04 status: &{Name:ha-558727-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.94s)
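Note: the non-zero exit above is expected. With every node stopped, "minikube status" exits non-zero (7 in this run, where host, kubelet and apiserver are all reported Stopped), so scripts should inspect the exit code rather than treat it as a failure. A minimal sketch:

  out/minikube-linux-arm64 -p ha-558727 status
  echo "status exit code: $?"   # 7 in this run, since the whole profile is stopped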

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (62.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-558727 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0210 10:43:13.869331  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-558727 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m1.397157616s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (62.39s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (45.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-558727 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-558727 --control-plane -v=7 --alsologtostderr: (43.793723876s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-558727 status -v=7 --alsologtostderr: (1.260574705s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.05s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.046641934s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                    
TestJSONOutput/start/Command (46.53s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-708658 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0210 10:44:22.669464  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 10:44:50.377651  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-708658 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (46.526921834s)
--- PASS: TestJSONOutput/start/Command (46.53s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-708658 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-708658 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.77s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-708658 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-708658 --output=json --user=testUser: (5.772685488s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-563381 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-563381 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.498315ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7a93df67-0c87-4be5-8dbd-dcabee4a095b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-563381] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5bd9906f-f2e3-4165-b40e-088ab11dd975","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20385"}}
	{"specversion":"1.0","id":"d69e2491-8192-418e-b62c-ff252fa15148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8946abbb-a7b1-4f48-acf9-a7dba86d719e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig"}}
	{"specversion":"1.0","id":"5f7daac2-2405-43cd-ac65-e16b71bbcf82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube"}}
	{"specversion":"1.0","id":"91c2cbf0-ed8e-4b6a-91e3-95dcfa84e093","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7ece5392-c9d6-4193-80f6-badfc4d67df0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"09d1f2a5-c6a6-44c9-80ea-d1c2c633ee9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-563381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-563381
--- PASS: TestErrorJSONOutput (0.25s)
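Note: every line emitted under --output=json is a CloudEvents-style object, and error events (like the DRV_UNSUPPORTED_OS one above) carry their name, message and exit code under .data. A hedged one-liner for pulling errors out of such a stream, assuming jq is installed and using a throwaway profile name:

  out/minikube-linux-arm64 start -p json-demo --output=json --driver=fail 2>/dev/null \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'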

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.08s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-629743 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-629743 --network=: (35.948438545s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-629743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-629743
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-629743: (2.105039854s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.08s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.77s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-901620 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-901620 --network=bridge: (29.78873384s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-901620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-901620
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-901620: (1.964359534s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.77s)

                                                
                                    
TestKicExistingNetwork (31.78s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0210 10:46:18.465897  581629 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0210 10:46:18.481774  581629 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0210 10:46:18.481862  581629 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0210 10:46:18.481885  581629 cli_runner.go:164] Run: docker network inspect existing-network
W0210 10:46:18.497916  581629 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0210 10:46:18.497956  581629 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0210 10:46:18.497970  581629 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0210 10:46:18.498761  581629 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0210 10:46:18.516512  581629 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37f7c82b9b3f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:2a:78:ce:04} reservation:<nil>}
I0210 10:46:18.521778  581629 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0210 10:46:18.522245  581629 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e10170}
I0210 10:46:18.522280  581629 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0210 10:46:18.523048  581629 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0210 10:46:18.596882  581629 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-905094 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-905094 --network=existing-network: (29.529943422s)
helpers_test.go:175: Cleaning up "existing-network-905094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-905094
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-905094: (2.086127412s)
I0210 10:46:50.230118  581629 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.78s)
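Note: the pre-created network in this test is just a plain bridge network on a free private subnet, which minikube then attaches to via --network. A simplified sketch of the create command logged above (minikube's bookkeeping labels and masquerade options omitted):

  docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 \
    -o com.docker.network.driver.mtu=1500 existing-network
  out/minikube-linux-arm64 start -p existing-network-905094 --network=existing-network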

                                                
                                    
TestKicCustomSubnet (34.64s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-715156 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-715156 --subnet=192.168.60.0/24: (32.410632543s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-715156 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-715156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-715156
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-715156: (2.207589979s)
--- PASS: TestKicCustomSubnet (34.64s)
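Note: verifying the requested subnet is a one-line docker network inspect with a Go template over IPAM.Config, mirroring the commands logged above:

  out/minikube-linux-arm64 start -p custom-subnet-715156 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-715156 --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24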

                                                
                                    
TestKicStaticIP (32.29s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-185780 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-185780 --static-ip=192.168.200.200: (30.038653034s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-185780 ip
helpers_test.go:175: Cleaning up "static-ip-185780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-185780
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-185780: (2.08243786s)
--- PASS: TestKicStaticIP (32.29s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (67.54s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-329259 --driver=docker  --container-runtime=containerd
E0210 10:48:13.869852  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-329259 --driver=docker  --container-runtime=containerd: (28.650774004s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-331934 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-331934 --driver=docker  --container-runtime=containerd: (32.919992223s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-329259
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-331934
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-331934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-331934
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-331934: (2.058427862s)
helpers_test.go:175: Cleaning up "first-329259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-329259
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-329259: (2.232355667s)
--- PASS: TestMinikubeProfile (67.54s)
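Note: switching the active profile is "minikube profile <name>", and "profile list -ojson" is what the test parses to confirm it. A small sketch that pulls the profile names back out, assuming jq is available and that the JSON keeps its usual top-level "valid" array (an assumption about the output shape, not taken from this log):

  out/minikube-linux-arm64 profile first-329259
  out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'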

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-017047 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-017047 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.73504526s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.74s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-017047 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.52s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-019607 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-019607 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.51747071s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.52s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-019607 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-017047 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-017047 --alsologtostderr -v=5: (1.635958238s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-019607 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-019607
E0210 10:49:22.669851  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-019607: (1.212838197s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.31s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-019607
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-019607: (6.309065275s)
--- PASS: TestMountStart/serial/RestartStopped (7.31s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-019607 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (67.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255774 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0210 10:49:36.944849  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-255774 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.117911315s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.82s)

TestMultiNode/serial/DeployApp2Nodes (15.66s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-255774 -- rollout status deployment/busybox: (13.605428484s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-f5bx4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-jz897 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-f5bx4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-jz897 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-f5bx4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-jz897 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.66s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-f5bx4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-f5bx4 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-jz897 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-255774 -- exec busybox-58667487b6-jz897 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)

TestMultiNode/serial/AddNode (15.93s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-255774 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-255774 -v 3 --alsologtostderr: (15.120302428s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.93s)

TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-255774 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.72s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.09s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp testdata/cp-test.txt multinode-255774:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp multinode-255774:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile238669388/001/cp-test_multinode-255774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp multinode-255774:/home/docker/cp-test.txt multinode-255774-m02:/home/docker/cp-test_multinode-255774_multinode-255774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m02 "sudo cat /home/docker/cp-test_multinode-255774_multinode-255774-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp multinode-255774:/home/docker/cp-test.txt multinode-255774-m03:/home/docker/cp-test_multinode-255774_multinode-255774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m03 "sudo cat /home/docker/cp-test_multinode-255774_multinode-255774-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp testdata/cp-test.txt multinode-255774-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp multinode-255774-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile238669388/001/cp-test_multinode-255774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp multinode-255774-m02:/home/docker/cp-test.txt multinode-255774:/home/docker/cp-test_multinode-255774-m02_multinode-255774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774 "sudo cat /home/docker/cp-test_multinode-255774-m02_multinode-255774.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp multinode-255774-m02:/home/docker/cp-test.txt multinode-255774-m03:/home/docker/cp-test_multinode-255774-m02_multinode-255774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m03 "sudo cat /home/docker/cp-test_multinode-255774-m02_multinode-255774-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp testdata/cp-test.txt multinode-255774-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp multinode-255774-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile238669388/001/cp-test_multinode-255774-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp multinode-255774-m03:/home/docker/cp-test.txt multinode-255774:/home/docker/cp-test_multinode-255774-m03_multinode-255774.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774 "sudo cat /home/docker/cp-test_multinode-255774-m03_multinode-255774.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 cp multinode-255774-m03:/home/docker/cp-test.txt multinode-255774-m02:/home/docker/cp-test_multinode-255774-m03_multinode-255774-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 ssh -n multinode-255774-m02 "sudo cat /home/docker/cp-test_multinode-255774-m03_multinode-255774-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.09s)

TestMultiNode/serial/StopNode (2.25s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-255774 node stop m03: (1.213987013s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-255774 status: exit status 7 (530.01264ms)

                                                
                                                
-- stdout --
	multinode-255774
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-255774-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-255774-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-255774 status --alsologtostderr: exit status 7 (505.436375ms)

                                                
                                                
-- stdout --
	multinode-255774
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-255774-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-255774-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 10:51:26.115986  705933 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:51:26.116270  705933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:51:26.116283  705933 out.go:358] Setting ErrFile to fd 2...
	I0210 10:51:26.116289  705933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:51:26.116557  705933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 10:51:26.116802  705933 out.go:352] Setting JSON to false
	I0210 10:51:26.116826  705933 mustload.go:65] Loading cluster: multinode-255774
	I0210 10:51:26.117262  705933 config.go:182] Loaded profile config "multinode-255774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 10:51:26.117285  705933 status.go:174] checking status of multinode-255774 ...
	I0210 10:51:26.117889  705933 cli_runner.go:164] Run: docker container inspect multinode-255774 --format={{.State.Status}}
	I0210 10:51:26.117976  705933 notify.go:220] Checking for updates...
	I0210 10:51:26.134479  705933 status.go:371] multinode-255774 host status = "Running" (err=<nil>)
	I0210 10:51:26.134502  705933 host.go:66] Checking if "multinode-255774" exists ...
	I0210 10:51:26.134786  705933 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-255774
	I0210 10:51:26.151455  705933 host.go:66] Checking if "multinode-255774" exists ...
	I0210 10:51:26.151799  705933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:51:26.151847  705933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-255774
	I0210 10:51:26.180587  705933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33643 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/multinode-255774/id_rsa Username:docker}
	I0210 10:51:26.266830  705933 ssh_runner.go:195] Run: systemctl --version
	I0210 10:51:26.271211  705933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:51:26.282742  705933 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 10:51:26.346323  705933 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2025-02-10 10:51:26.336174696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 10:51:26.346946  705933 kubeconfig.go:125] found "multinode-255774" server: "https://192.168.58.2:8443"
	I0210 10:51:26.346989  705933 api_server.go:166] Checking apiserver status ...
	I0210 10:51:26.347038  705933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 10:51:26.358951  705933 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	I0210 10:51:26.368260  705933 api_server.go:182] apiserver freezer: "12:freezer:/docker/08e698f911df8d6ca511d99c9bef1e1e3c3669244b7b3bb2a1eeec87010e3137/kubepods/burstable/pod83a2c9867f9550f2ceafdc1ff5c3e5b3/81080b2f9091eebca1cd52f0dd376f999c0f9d3b64aeb4c8458d06db10c6af86"
	I0210 10:51:26.368336  705933 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/08e698f911df8d6ca511d99c9bef1e1e3c3669244b7b3bb2a1eeec87010e3137/kubepods/burstable/pod83a2c9867f9550f2ceafdc1ff5c3e5b3/81080b2f9091eebca1cd52f0dd376f999c0f9d3b64aeb4c8458d06db10c6af86/freezer.state
	I0210 10:51:26.377057  705933 api_server.go:204] freezer state: "THAWED"
	I0210 10:51:26.377090  705933 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0210 10:51:26.385459  705933 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0210 10:51:26.385488  705933 status.go:463] multinode-255774 apiserver status = Running (err=<nil>)
	I0210 10:51:26.385501  705933 status.go:176] multinode-255774 status: &{Name:multinode-255774 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:51:26.385544  705933 status.go:174] checking status of multinode-255774-m02 ...
	I0210 10:51:26.385860  705933 cli_runner.go:164] Run: docker container inspect multinode-255774-m02 --format={{.State.Status}}
	I0210 10:51:26.403267  705933 status.go:371] multinode-255774-m02 host status = "Running" (err=<nil>)
	I0210 10:51:26.403295  705933 host.go:66] Checking if "multinode-255774-m02" exists ...
	I0210 10:51:26.403603  705933 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-255774-m02
	I0210 10:51:26.420948  705933 host.go:66] Checking if "multinode-255774-m02" exists ...
	I0210 10:51:26.421257  705933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 10:51:26.421307  705933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-255774-m02
	I0210 10:51:26.440812  705933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33648 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/multinode-255774-m02/id_rsa Username:docker}
	I0210 10:51:26.526718  705933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 10:51:26.538529  705933 status.go:176] multinode-255774-m02 status: &{Name:multinode-255774-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:51:26.538561  705933 status.go:174] checking status of multinode-255774-m03 ...
	I0210 10:51:26.538874  705933 cli_runner.go:164] Run: docker container inspect multinode-255774-m03 --format={{.State.Status}}
	I0210 10:51:26.559556  705933 status.go:371] multinode-255774-m03 host status = "Stopped" (err=<nil>)
	I0210 10:51:26.559576  705933 status.go:384] host is not running, skipping remaining checks
	I0210 10:51:26.559583  705933 status.go:176] multinode-255774-m03 status: &{Name:multinode-255774-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)

TestMultiNode/serial/StartAfterStop (9.57s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-255774 node start m03 -v=7 --alsologtostderr: (8.767293961s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.57s)

TestMultiNode/serial/RestartKeepsNodes (88.43s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-255774
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-255774
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-255774: (24.90171s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255774 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-255774 --wait=true -v=8 --alsologtostderr: (1m3.397978824s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-255774
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.43s)

TestMultiNode/serial/DeleteNode (5.36s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-255774 node delete m03: (4.669099351s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.36s)

TestMultiNode/serial/StopMultiNode (23.89s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 stop
E0210 10:53:13.869754  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-255774 stop: (23.690744898s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-255774 status: exit status 7 (101.412892ms)

                                                
                                                
-- stdout --
	multinode-255774
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-255774-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-255774 status --alsologtostderr: exit status 7 (102.085077ms)

                                                
                                                
-- stdout --
	multinode-255774
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-255774-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 10:53:33.781414  713975 out.go:345] Setting OutFile to fd 1 ...
	I0210 10:53:33.781595  713975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:53:33.781604  713975 out.go:358] Setting ErrFile to fd 2...
	I0210 10:53:33.781610  713975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:53:33.781853  713975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 10:53:33.782049  713975 out.go:352] Setting JSON to false
	I0210 10:53:33.782102  713975 mustload.go:65] Loading cluster: multinode-255774
	I0210 10:53:33.782177  713975 notify.go:220] Checking for updates...
	I0210 10:53:33.783160  713975 config.go:182] Loaded profile config "multinode-255774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 10:53:33.783190  713975 status.go:174] checking status of multinode-255774 ...
	I0210 10:53:33.783735  713975 cli_runner.go:164] Run: docker container inspect multinode-255774 --format={{.State.Status}}
	I0210 10:53:33.803146  713975 status.go:371] multinode-255774 host status = "Stopped" (err=<nil>)
	I0210 10:53:33.803169  713975 status.go:384] host is not running, skipping remaining checks
	I0210 10:53:33.803177  713975 status.go:176] multinode-255774 status: &{Name:multinode-255774 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 10:53:33.803209  713975 status.go:174] checking status of multinode-255774-m02 ...
	I0210 10:53:33.803529  713975 cli_runner.go:164] Run: docker container inspect multinode-255774-m02 --format={{.State.Status}}
	I0210 10:53:33.825465  713975 status.go:371] multinode-255774-m02 host status = "Stopped" (err=<nil>)
	I0210 10:53:33.825491  713975 status.go:384] host is not running, skipping remaining checks
	I0210 10:53:33.825499  713975 status.go:176] multinode-255774-m02 status: &{Name:multinode-255774-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.89s)

TestMultiNode/serial/RestartMultiNode (53.64s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255774 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0210 10:54:22.669755  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-255774 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.974567899s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-255774 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.64s)

TestMultiNode/serial/ValidateNameConflict (33.5s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-255774
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255774-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-255774-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.740947ms)

                                                
                                                
-- stdout --
	* [multinode-255774-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-255774-m02' is duplicated with machine name 'multinode-255774-m02' in profile 'multinode-255774'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-255774-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-255774-m03 --driver=docker  --container-runtime=containerd: (30.576439298s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-255774
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-255774: exit status 80 (337.368958ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-255774 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-255774-m03 already exists in multinode-255774-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-255774-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-255774-m03: (2.427976478s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.50s)

TestPreload (122.1s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-720752 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0210 10:55:45.739153  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-720752 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m26.427683677s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-720752 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-720752 image pull gcr.io/k8s-minikube/busybox: (1.875335592s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-720752
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-720752: (12.097028272s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-720752 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-720752 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (18.98051189s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-720752 image list
helpers_test.go:175: Cleaning up "test-preload-720752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-720752
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-720752: (2.378106976s)
--- PASS: TestPreload (122.10s)

TestScheduledStopUnix (104.83s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-906520 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-906520 --memory=2048 --driver=docker  --container-runtime=containerd: (29.148020233s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-906520 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-906520 -n scheduled-stop-906520
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-906520 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0210 10:57:36.832611  581629 retry.go:31] will retry after 114.535µs: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.833798  581629 retry.go:31] will retry after 211.492µs: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.834968  581629 retry.go:31] will retry after 148.446µs: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.836105  581629 retry.go:31] will retry after 394.549µs: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.837254  581629 retry.go:31] will retry after 544.81µs: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.838415  581629 retry.go:31] will retry after 643.731µs: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.839621  581629 retry.go:31] will retry after 1.191718ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.841863  581629 retry.go:31] will retry after 1.339321ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.844099  581629 retry.go:31] will retry after 3.059169ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.847256  581629 retry.go:31] will retry after 3.112729ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.851578  581629 retry.go:31] will retry after 6.188857ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.858783  581629 retry.go:31] will retry after 8.262929ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.868096  581629 retry.go:31] will retry after 8.578185ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.879954  581629 retry.go:31] will retry after 17.542452ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.898161  581629 retry.go:31] will retry after 41.123039ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
I0210 10:57:36.940416  581629 retry.go:31] will retry after 53.295104ms: open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/scheduled-stop-906520/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-906520 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-906520 -n scheduled-stop-906520
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-906520
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-906520 --schedule 15s
E0210 10:58:13.869844  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-906520
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-906520: exit status 7 (68.53643ms)

                                                
                                                
-- stdout --
	scheduled-stop-906520
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-906520 -n scheduled-stop-906520
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-906520 -n scheduled-stop-906520: exit status 7 (70.428392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-906520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-906520
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-906520: (4.032138566s)
--- PASS: TestScheduledStopUnix (104.83s)

TestInsufficientStorage (12.94s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-286764 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-286764 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.428976853s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ae7ac615-89fd-42b3-ad36-fa14b094c71b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-286764] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"91faac45-f88d-4819-a4b2-0151c7bf08b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20385"}}
	{"specversion":"1.0","id":"99f8e730-29fd-444e-a1ab-39efacdd98e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fa7eb1f4-848d-4e63-b0c7-82675a517504","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig"}}
	{"specversion":"1.0","id":"779c2a59-6450-49ef-b0ad-484590a8c9e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube"}}
	{"specversion":"1.0","id":"8c6c4e40-7412-40d2-8ee0-d9f99621a8c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"86e0f1f7-0f3f-49ae-aff0-60943ff1cd68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"209609c3-38c0-40b7-93d3-6e8d9f11bafa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"79c7e58d-5565-4c09-b57d-e24ed15873a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1a4b0b8e-9269-44e2-b208-ae24e2478320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3db8aa72-375d-4094-9121-d78e9f71ae52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"842316e5-fc25-421b-8150-dbd9fea1b399","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-286764\" primary control-plane node in \"insufficient-storage-286764\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"21528f63-46b2-490e-8f8e-906e6f8afcf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2415d387-dfb8-4cdb-9bec-b39d8395b462","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a257716c-059b-461e-a431-7c0522de1e90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-286764 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-286764 --output=json --layout=cluster: exit status 7 (279.026959ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-286764","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-286764","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 10:59:02.676795  732830 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-286764" does not appear in /home/jenkins/minikube-integration/20385-576242/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-286764 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-286764 --output=json --layout=cluster: exit status 7 (295.618142ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-286764","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-286764","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 10:59:02.974041  732892 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-286764" does not appear in /home/jenkins/minikube-integration/20385-576242/kubeconfig
	E0210 10:59:02.984563  732892 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/insufficient-storage-286764/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-286764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-286764
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-286764: (1.935682224s)
--- PASS: TestInsufficientStorage (12.94s)

TestRunningBinaryUpgrade (84.76s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1559359985 start -p running-upgrade-645273 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0210 11:04:22.669305  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1559359985 start -p running-upgrade-645273 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.709060818s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-645273 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-645273 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.01052183s)
helpers_test.go:175: Cleaning up "running-upgrade-645273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-645273
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-645273: (2.42258513s)
--- PASS: TestRunningBinaryUpgrade (84.76s)

TestKubernetesUpgrade (349.32s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-835449 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-835449 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.958267419s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-835449
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-835449: (1.348481261s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-835449 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-835449 status --format={{.Host}}: exit status 7 (105.086968ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-835449 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-835449 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.030426293s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-835449 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-835449 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-835449 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (112.591349ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-835449] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-835449
	    minikube start -p kubernetes-upgrade-835449 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8354492 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-835449 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-835449 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-835449 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.459814034s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-835449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-835449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-835449: (2.166452836s)
--- PASS: TestKubernetesUpgrade (349.32s)
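For readers following the downgrade step above: the log shows minikube refusing to take the existing v1.32.1 cluster back to v1.20.0 and exiting with status 106 (K8S_DOWNGRADE_UNSUPPORTED). The following is a minimal, hypothetical Go sketch of that same check, not the actual version_upgrade_test.go code; the binary path and profile name are taken from the log, everything else is illustrative.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same start command the downgrade step above runs.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "kubernetes-upgrade-835449",
		"--memory=2200",
		"--kubernetes-version=v1.20.0",
		"--driver=docker",
		"--container-runtime=containerd")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this was exit status 106 (K8S_DOWNGRADE_UNSUPPORTED).
		fmt.Printf("downgrade rejected as expected: exit status %d\n", exitErr.ExitCode())
		return
	}
	fmt.Println("unexpected: the downgrade attempt did not fail")
}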

                                                
                                    
TestMissingContainerUpgrade (172.81s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.875619278 start -p missing-upgrade-361559 --memory=2200 --driver=docker  --container-runtime=containerd
E0210 10:59:22.669680  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.875619278 start -p missing-upgrade-361559 --memory=2200 --driver=docker  --container-runtime=containerd: (1m39.832868316s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-361559
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-361559: (10.297161589s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-361559
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-361559 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-361559 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.710985438s)
helpers_test.go:175: Cleaning up "missing-upgrade-361559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-361559
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-361559: (2.30988674s)
--- PASS: TestMissingContainerUpgrade (172.81s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-750447 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-750447 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (105.346068ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-750447] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-750447 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-750447 --driver=docker  --container-runtime=containerd: (38.890750943s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-750447 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.51s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-750447 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-750447 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.361169139s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-750447 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-750447 status -o json: exit status 2 (880.471273ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-750447","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-750447
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-750447: (2.972923279s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.21s)

                                                
                                    
TestNoKubernetes/serial/Start (5.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-750447 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-750447 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.740653833s)
--- PASS: TestNoKubernetes/serial/Start (5.74s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-750447 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-750447 "sudo systemctl is-active --quiet service kubelet": exit status 1 (257.858331ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
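The check above relies on `systemctl is-active --quiet` exiting 0 only when the unit is active, so a non-zero exit from the ssh'd command is the passing condition. Below is an illustrative Go helper (not part of the suite; the helper name is mine, the binary path and profile come from the log) showing that interpretation.

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive probes the kubelet unit through `minikube ssh`; a nil error
// from Run means the remote command exited 0, i.e. the unit is active.
func kubeletActive(profile string) bool {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil
}

func main() {
	if kubeletActive("NoKubernetes-750447") {
		fmt.Println("kubelet is running (unexpected for --no-kubernetes)")
		return
	}
	fmt.Println("kubelet is not running, as the test expects")
}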

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-750447
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-750447: (1.212914677s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-750447 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-750447 --driver=docker  --container-runtime=containerd: (6.502926332s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-750447 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-750447 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.625431ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (110.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1227067902 start -p stopped-upgrade-635869 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1227067902 start -p stopped-upgrade-635869 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.085669426s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1227067902 -p stopped-upgrade-635869 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1227067902 -p stopped-upgrade-635869 stop: (20.09607706s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-635869 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0210 11:03:13.874754  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-635869 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.077875739s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.26s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-635869
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-635869: (1.175239913s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestPause/serial/Start (69.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-997946 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-997946 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m9.862565567s)
--- PASS: TestPause/serial/Start (69.86s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.85s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-997946 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-997946 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.82929612s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.85s)

                                                
                                    
TestPause/serial/Pause (1.27s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-997946 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-997946 --alsologtostderr -v=5: (1.268800538s)
--- PASS: TestPause/serial/Pause (1.27s)

                                                
                                    
TestPause/serial/VerifyStatus (0.47s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-997946 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-997946 --output=json --layout=cluster: exit status 2 (472.524812ms)

                                                
                                                
-- stdout --
	{"Name":"pause-997946","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-997946","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.47s)
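The JSON printed by `status --output=json --layout=cluster` above can be decoded with a small struct. The sketch below is illustrative only: the struct covers a subset of the fields shown in the output and is not minikube's own type.

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus models only the top-level fields used here.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	raw := []byte(`{"Name":"pause-997946","StatusCode":418,"StatusName":"Paused","Step":"Done"}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	// StatusName "Paused" (code 418) is what VerifyStatus expects at this point.
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
}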

                                                
                                    
TestPause/serial/Unpause (0.99s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-997946 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.99s)

                                                
                                    
TestPause/serial/PauseAgain (1.25s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-997946 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-997946 --alsologtostderr -v=5: (1.245484181s)
--- PASS: TestPause/serial/PauseAgain (1.25s)

                                                
                                    
TestPause/serial/DeletePaused (3.22s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-997946 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-997946 --alsologtostderr -v=5: (3.219726712s)
--- PASS: TestPause/serial/DeletePaused (3.22s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.03s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-997946
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-997946: exit status 1 (20.447008ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-997946: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.03s)

                                                
                                    
TestNetworkPlugins/group/false (5.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-180674 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-180674 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (261.039421ms)

                                                
                                                
-- stdout --
	* [false-180674] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 11:06:47.885767  774663 out.go:345] Setting OutFile to fd 1 ...
	I0210 11:06:47.885993  774663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:06:47.886019  774663 out.go:358] Setting ErrFile to fd 2...
	I0210 11:06:47.886040  774663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:06:47.886316  774663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
	I0210 11:06:47.886783  774663 out.go:352] Setting JSON to false
	I0210 11:06:47.887856  774663 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13753,"bootTime":1739171855,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0210 11:06:47.887966  774663 start.go:139] virtualization:  
	I0210 11:06:47.891692  774663 out.go:177] * [false-180674] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0210 11:06:47.895306  774663 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:06:47.895374  774663 notify.go:220] Checking for updates...
	I0210 11:06:47.900760  774663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:06:47.903737  774663 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
	I0210 11:06:47.906889  774663 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
	I0210 11:06:47.909664  774663 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0210 11:06:47.912571  774663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:06:47.920377  774663 config.go:182] Loaded profile config "force-systemd-flag-929231": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 11:06:47.920563  774663 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:06:47.978617  774663 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 11:06:47.978744  774663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 11:06:48.048614  774663 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-10 11:06:48.037988502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0210 11:06:48.048736  774663 docker.go:318] overlay module found
	I0210 11:06:48.051847  774663 out.go:177] * Using the docker driver based on user configuration
	I0210 11:06:48.054798  774663 start.go:297] selected driver: docker
	I0210 11:06:48.054817  774663 start.go:901] validating driver "docker" against <nil>
	I0210 11:06:48.054833  774663 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:06:48.058400  774663 out.go:201] 
	W0210 11:06:48.061182  774663 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0210 11:06:48.064397  774663 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-180674 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-180674" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-180674

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180674"

                                                
                                                
----------------------- debugLogs end: false-180674 [took: 4.667121129s] --------------------------------
helpers_test.go:175: Cleaning up "false-180674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-180674
--- PASS: TestNetworkPlugins/group/false (5.14s)
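This group passes by confirming that `--cni=false` is rejected when the container runtime is containerd (exit status 14, MK_USAGE, as shown above). A hedged Go sketch of the same check, looking for the usage message in the combined output; the binary path, profile name, and flags are taken from the log, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "false-180674",
		"--memory=2048",
		"--cni=false",
		"--driver=docker",
		"--container-runtime=containerd")
	out, err := cmd.CombinedOutput()

	// Expect a non-zero exit plus the usage message seen in the stderr above.
	rejected := err != nil &&
		strings.Contains(string(out), `The "containerd" container runtime requires CNI`)
	fmt.Println("usage error reported as expected:", rejected)
}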

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (162.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0210 11:08:13.869773  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:09:22.670767  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m42.95319635s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (72.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-861376 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-861376 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m12.016901169s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-705847 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e750810e-d605-4ed6-906a-ec5cda381f90] Pending
helpers_test.go:344: "busybox" [e750810e-d605-4ed6-906a-ec5cda381f90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e750810e-d605-4ed6-906a-ec5cda381f90] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004401931s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-705847 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.69s)
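DeployApp applies testdata/busybox.yaml and then waits up to 8 minutes for pods labelled integration-test=busybox to reach Running, which is the Pending -> Running progression visible above. The following is an illustrative poller under stated assumptions (helper name and poll interval are mine; the kubectl context and label selector come from the log), not the helper the suite actually uses.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForBusybox asks kubectl for the phase of every pod matching the label
// and returns once all matching pods report Running, or the deadline passes.
func waitForBusybox(context string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 {
			allRunning := true
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("pods not Running within %s", timeout)
}

func main() {
	if err := waitForBusybox("old-k8s-version-705847", 8*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("busybox pod is Running")
}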

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-705847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-705847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.602341597s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-705847 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-705847 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-705847 --alsologtostderr -v=3: (12.508775996s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-705847 -n old-k8s-version-705847
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-705847 -n old-k8s-version-705847: exit status 7 (111.879271ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-705847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-861376 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1abfac55-8ffa-43e8-9938-790e7beca5f0] Pending
helpers_test.go:344: "busybox" [1abfac55-8ffa-43e8-9938-790e7beca5f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1abfac55-8ffa-43e8-9938-790e7beca5f0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003112661s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-861376 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-861376 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-861376 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.30698868s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-861376 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-861376 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-861376 --alsologtostderr -v=3: (12.052269126s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-861376 -n no-preload-861376
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-861376 -n no-preload-861376: exit status 7 (96.379848ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-861376 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (266.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-861376 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0210 11:12:25.741243  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:13:13.869458  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:14:22.670159  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-861376 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m26.540972709s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-861376 -n no-preload-861376
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.91s)
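
The second start reuses the stopped profile, so minikube restarts the existing container rather than creating a new one; --preload=false makes it pull images instead of using the preloaded tarball, which is the point of the no-preload group. Flags are taken verbatim from the log; the trailing status call is the readiness check the test performs afterwards:

	out/minikube-linux-arm64 start -p no-preload-861376 --memory=2200 --alsologtostderr \
	  --wait=true --preload=false --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.32.1
	out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-861376 -n no-preload-861376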

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jtdzn" [8b7c4592-6141-4de1-8556-a49265892633] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003591548s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jtdzn" [8b7c4592-6141-4de1-8556-a49265892633] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009361763s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-861376 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-861376 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-861376 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-861376 -n no-preload-861376
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-861376 -n no-preload-861376: exit status 2 (393.134073ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-861376 -n no-preload-861376
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-861376 -n no-preload-861376: exit status 2 (376.951939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-861376 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-861376 -n no-preload-861376
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-861376 -n no-preload-861376
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.64s)
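
While a profile is paused, the status templates used above report the apiserver as Paused and the kubelet as Stopped, and status exits with code 2; the test accepts that as the expected paused state. The same cycle by hand, with the outputs observed in this run noted as comments:

	out/minikube-linux-arm64 pause -p no-preload-861376 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-861376 -n no-preload-861376   # Paused, exit 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-861376 -n no-preload-861376     # Stopped, exit 2
	out/minikube-linux-arm64 unpause -p no-preload-861376 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-861376 -n no-preload-861376   # exits 0 again once unpaused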

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (72.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-822142 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-822142 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m12.31678349s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s9bfz" [cfb8e2db-d154-40a9-a6cd-761f2ec4dd52] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005484623s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s9bfz" [cfb8e2db-d154-40a9-a6cd-761f2ec4dd52] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004434764s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-705847 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-705847 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-705847 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-705847 --alsologtostderr -v=1: (1.161893962s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-705847 -n old-k8s-version-705847
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-705847 -n old-k8s-version-705847: exit status 2 (370.178736ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-705847 -n old-k8s-version-705847
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-705847 -n old-k8s-version-705847: exit status 2 (372.543569ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-705847 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-705847 -n old-k8s-version-705847
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-705847 -n old-k8s-version-705847
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-246255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0210 11:18:13.869153  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-246255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (54.914295365s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-822142 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4c738192-0506-41ce-a080-7eb467acba29] Pending
helpers_test.go:344: "busybox" [4c738192-0506-41ce-a080-7eb467acba29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4c738192-0506-41ce-a080-7eb467acba29] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003326151s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-822142 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.42s)
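
DeployApp only applies the suite's busybox fixture and then execs into the pod; the harness's 8m0s readiness poll can be approximated with kubectl wait. The create and exec commands are from the log (testdata/busybox.yaml is a fixture in the minikube test tree); the wait line is an illustrative equivalent, not what the harness literally runs:

	kubectl --context embed-certs-822142 create -f testdata/busybox.yaml
	# Roughly what the harness does while polling for integration-test=busybox pods to become Ready.
	kubectl --context embed-certs-822142 wait --for=condition=Ready pod \
	  -l integration-test=busybox --timeout=8m0s
	kubectl --context embed-certs-822142 exec busybox -- /bin/sh -c "ulimit -n"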

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-822142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-822142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.37839877s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-822142 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-822142 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-822142 --alsologtostderr -v=3: (12.118289967s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-822142 -n embed-certs-822142
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-822142 -n embed-certs-822142: exit status 7 (79.20719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-822142 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (289.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-822142 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-822142 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m49.147407234s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-822142 -n embed-certs-822142
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (289.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-246255 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [63213b15-f01a-4d30-9bf9-7e86ae6b0232] Pending
helpers_test.go:344: "busybox" [63213b15-f01a-4d30-9bf9-7e86ae6b0232] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [63213b15-f01a-4d30-9bf9-7e86ae6b0232] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.033506835s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-246255 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-246255 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-246255 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.10383329s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-246255 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-246255 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-246255 --alsologtostderr -v=3: (12.119614615s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-246255 -n default-k8s-diff-port-246255
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-246255 -n default-k8s-diff-port-246255: exit status 7 (82.698199ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-246255 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-246255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0210 11:19:22.669900  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:50.870727  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:50.877079  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:50.888447  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:50.909923  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:50.951345  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:51.032903  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:51.194539  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:51.516110  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:52.158145  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:53.439802  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:20:56.002137  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:01.123933  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:11.365766  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:31.847648  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:59.182017  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:59.188413  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:59.199862  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:59.221313  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:59.262780  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:59.344313  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:59.505788  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:21:59.827543  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:22:00.469233  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:22:01.750669  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:22:04.312436  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:22:09.434872  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:22:12.808999  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:22:19.676836  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:22:40.158532  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:22:56.948066  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:13.869873  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/addons-624397/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:23:21.119998  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-246255 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m26.855644947s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-246255 -n default-k8s-diff-port-246255
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-25xgs" [36b7fee6-67fa-4c46-87e4-0ec8035e0676] Running
E0210 11:23:34.731022  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003756812s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-25xgs" [36b7fee6-67fa-4c46-87e4-0ec8035e0676] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003488593s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-822142 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-p42v5" [3cc79eb7-3219-4489-9e57-1f6100358e6c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002942099s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-822142 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-822142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-822142 -n embed-certs-822142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-822142 -n embed-certs-822142: exit status 2 (335.577001ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-822142 -n embed-certs-822142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-822142 -n embed-certs-822142: exit status 2 (341.074664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-822142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-822142 -n embed-certs-822142
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-822142 -n embed-certs-822142
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-p42v5" [3cc79eb7-3219-4489-9e57-1f6100358e6c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009094898s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-246255 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (44.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-307760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-307760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (44.225514119s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.23s)
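
The newest-cni profile starts with a reduced --wait set (apiserver, system pods, default service account only) and an explicit CNI network plugin, passing the pod CIDR to kubeadm via --extra-config; this is also why the later DeployApp and UserAppExistsAfterStop steps are no-ops for this group ("cni mode requires additional setup before pods can schedule"). Flags verbatim from the log:

	out/minikube-linux-arm64 start -p newest-cni-307760 --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.1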

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-246255 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-246255 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-246255 --alsologtostderr -v=1: (1.039630389s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-246255 -n default-k8s-diff-port-246255
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-246255 -n default-k8s-diff-port-246255: exit status 2 (395.946238ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-246255 -n default-k8s-diff-port-246255
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-246255 -n default-k8s-diff-port-246255: exit status 2 (448.640295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-246255 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-246255 -n default-k8s-diff-port-246255
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-246255 -n default-k8s-diff-port-246255
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.72s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (59.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0210 11:24:22.669918  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (59.977508139s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-307760 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-307760 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.849053627s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-307760 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-307760 --alsologtostderr -v=3: (1.369815463s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-307760 -n newest-cni-307760
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-307760 -n newest-cni-307760: exit status 7 (121.297429ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-307760 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-307760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0210 11:24:43.044628  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-307760 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (17.123717591s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-307760 -n newest-cni-307760
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-307760 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)
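
VerifyKubernetesImages lists the images cached in the profile and reports any that are not part of minikube's own expected set (here the kindest/kindnetd CNI images). The listing is a single command; the JSON output is what the test parses:

	out/minikube-linux-arm64 -p newest-cni-307760 image list --format=json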

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-307760 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-307760 -n newest-cni-307760
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-307760 -n newest-cni-307760: exit status 2 (303.898494ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-307760 -n newest-cni-307760
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-307760 -n newest-cni-307760: exit status 2 (352.382075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-307760 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-307760 -n newest-cni-307760
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-307760 -n newest-cni-307760
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)
E0210 11:30:01.599543  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:01.606939  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:01.636420  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:01.657928  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:01.699639  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:01.781311  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:01.942811  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:02.264730  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:02.907060  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:04.188833  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:06.750531  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:11.872396  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:14.173867  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:30:22.113846  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/auto-180674/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-180674 "pgrep -a kubelet"
I0210 11:25:01.205085  581629 config.go:182] Loaded profile config "auto-180674": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-180674 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-kpfzk" [4ef6ab6a-915c-4165-9171-a463e022f963] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-kpfzk" [4ef6ab6a-915c-4165-9171-a463e022f963] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003502342s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (57.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (57.642373809s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.64s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-180674 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0210 11:25:50.870370  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.131895546s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4bgc2" [907b2f04-8ceb-4170-b7f2-c4984fcb8192] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003479276s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-180674 "pgrep -a kubelet"
I0210 11:26:06.316250  581629 config.go:182] Loaded profile config "kindnet-180674": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-180674 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t6jxm" [6f38a1a6-f2e9-482a-b193-f8f7f47f0e30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t6jxm" [6f38a1a6-f2e9-482a-b193-f8f7f47f0e30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004162734s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-180674 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.916778748s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.92s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7qbmm" [1e21f7bc-2d63-4a85-a78c-5d6be342c154] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003615655s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-180674 "pgrep -a kubelet"
I0210 11:26:52.200612  581629 config.go:182] Loaded profile config "calico-180674": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-180674 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-72qgt" [b5204398-8fff-4e26-a835-6ff332b43c85] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-72qgt" [b5204398-8fff-4e26-a835-6ff332b43c85] Running
E0210 11:26:59.182261  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/no-preload-861376/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00412506s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-180674 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (75.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m15.512027599s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.51s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-180674 "pgrep -a kubelet"
I0210 11:27:36.664438  581629 config.go:182] Loaded profile config "custom-flannel-180674": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-180674 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-52vrv" [48a0867b-66e3-4da0-abde-5bb489712c8f] Pending
helpers_test.go:344: "netcat-5d86dc444-52vrv" [48a0867b-66e3-4da0-abde-5bb489712c8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004154551s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-180674 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (52.900779213s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.90s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-180674 "pgrep -a kubelet"
I0210 11:28:44.739462  581629 config.go:182] Loaded profile config "enable-default-cni-180674": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-180674 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-btm6k" [edd073d0-9605-4063-b52d-b5245c6e7b6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-btm6k" [edd073d0-9605-4063-b52d-b5245c6e7b6f] Running
E0210 11:28:52.233639  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:28:52.239988  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:28:52.251356  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:28:52.272812  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:28:52.314843  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:28:52.396225  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:28:52.557703  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:28:52.879536  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
E0210 11:28:53.521648  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003957812s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-180674 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0210 11:28:54.803271  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9k2jm" [eb1c5dac-307b-4140-8c58-4b393815921e] Running
E0210 11:29:12.729064  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/default-k8s-diff-port-246255/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006777794s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-180674 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-180674 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-k27dx" [f16e9491-77ef-425f-85dd-21034776cff6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-k27dx" [f16e9491-77ef-425f-85dd-21034776cff6] Running
E0210 11:29:22.669920  581629 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/functional-388309/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003378721s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (75.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-180674 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m15.310311641s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-180674 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-180674 "pgrep -a kubelet"
I0210 11:30:31.439272  581629 config.go:182] Loaded profile config "bridge-180674": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-180674 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tn8lx" [9d4ee6e2-e9f8-448a-a63f-dd78f5d44418] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-tn8lx" [9d4ee6e2-e9f8-448a-a63f-dd78f5d44418] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004313239s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-180674 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-180674 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    

Test skip (30/331)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.58s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-847108 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-847108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-847108
--- SKIP: TestDownloadOnlyKic (0.58s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1804: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-616281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-616281
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-180674 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-180674" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20385-576242/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:06:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-929231
contexts:
- context:
    cluster: force-systemd-flag-929231
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 11:06:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: force-systemd-flag-929231
  name: force-systemd-flag-929231
current-context: force-systemd-flag-929231
kind: Config
preferences: {}
users:
- name: force-systemd-flag-929231
  user:
    client-certificate: /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/force-systemd-flag-929231/client.crt
    client-key: /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/force-systemd-flag-929231/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-180674

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180674"

                                                
                                                
----------------------- debugLogs end: kubenet-180674 [took: 5.424078315s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-180674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-180674
--- SKIP: TestNetworkPlugins/group/kubenet (5.65s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-180674 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-180674" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-180674

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-180674" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180674"

                                                
                                                
----------------------- debugLogs end: cilium-180674 [took: 6.006828369s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-180674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-180674
--- SKIP: TestNetworkPlugins/group/cilium (6.19s)

                                                
                                    