Test Report: Docker_Linux_containerd_arm64 20045

70ee1ceb4b2f7849aa4717a6092bbfa282d9029b:2024-12-05:37344

Test fail (1/330)

Order  Failed test                                               Duration (s)
304    TestStartStop/group/old-k8s-version/serial/SecondStart    380.7
TestStartStop/group/old-k8s-version/serial/SecondStart (380.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-066167 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1204 23:59:33.880519    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:59:45.366620    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-066167 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m17.162435798s)
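The start command itself returned non-zero (exit status 102) after 6m17s, and that exit is what fails SecondStart. To retry the same invocation by hand against this profile, a minimal sketch (flags abridged from the command above; out/minikube-linux-arm64 assumes a local build of this commit):

    out/minikube-linux-arm64 start -p old-k8s-version-066167 --memory=2200 \
      --alsologtostderr --wait=true --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.20.0
    echo "exit status: $?"    # 102 in this run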

-- stdout --
	* [old-k8s-version-066167] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-066167" primary control-plane node in "old-k8s-version-066167" cluster
	* Pulling base image v0.0.45-1730888964-19917 ...
	* Restarting existing docker container for "old-k8s-version-066167" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-066167 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
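Note the addon image list in the stdout above: fake.domain/registry.k8s.io/echoserver:1.4 matches CustomAddonRegistries:map[MetricsServer:fake.domain] in the cluster config dumped below, i.e. this test deliberately points the metrics-server addon at a non-resolvable registry, so pulls of that image cannot succeed. One way to see the resulting pod state once the cluster is up (a sketch; the k8s-app=metrics-server label follows the upstream metrics-server manifest):

    kubectl --context old-k8s-version-066167 -n kube-system \
      get pods -l k8s-app=metrics-server -o wide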
** stderr ** 
	I1204 23:58:52.147575  216030 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:58:52.147731  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:58:52.147745  216030 out.go:358] Setting ErrFile to fd 2...
	I1204 23:58:52.147750  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:58:52.148163  216030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1204 23:58:52.148653  216030 out.go:352] Setting JSON to false
	I1204 23:58:52.151539  216030 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6083,"bootTime":1733350650,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1204 23:58:52.151628  216030 start.go:139] virtualization:  
	I1204 23:58:52.155307  216030 out.go:177] * [old-k8s-version-066167] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1204 23:58:52.158922  216030 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:58:52.158998  216030 notify.go:220] Checking for updates...
	I1204 23:58:52.166845  216030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:58:52.169698  216030 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1204 23:58:52.172369  216030 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	I1204 23:58:52.175093  216030 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1204 23:58:52.177697  216030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:58:52.180862  216030 config.go:182] Loaded profile config "old-k8s-version-066167": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1204 23:58:52.184261  216030 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 23:58:52.187008  216030 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:58:52.230847  216030 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:58:52.230955  216030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:58:52.318578  216030 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-04 23:58:52.309551212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:58:52.318686  216030 docker.go:318] overlay module found
	I1204 23:58:52.321786  216030 out.go:177] * Using the docker driver based on existing profile
	I1204 23:58:52.324627  216030 start.go:297] selected driver: docker
	I1204 23:58:52.324647  216030 start.go:901] validating driver "docker" against &{Name:old-k8s-version-066167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:58:52.324767  216030 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:58:52.325595  216030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:58:52.401669  216030 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-04 23:58:52.390292228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:58:52.402062  216030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:58:52.402096  216030 cni.go:84] Creating CNI manager for ""
	I1204 23:58:52.402147  216030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1204 23:58:52.402190  216030 start.go:340] cluster config:
	{Name:old-k8s-version-066167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:58:52.405202  216030 out.go:177] * Starting "old-k8s-version-066167" primary control-plane node in "old-k8s-version-066167" cluster
	I1204 23:58:52.407862  216030 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1204 23:58:52.410492  216030 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1204 23:58:52.413203  216030 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1204 23:58:52.413264  216030 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1204 23:58:52.413278  216030 cache.go:56] Caching tarball of preloaded images
	I1204 23:58:52.413365  216030 preload.go:172] Found /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 23:58:52.413381  216030 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1204 23:58:52.413502  216030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/config.json ...
	I1204 23:58:52.413721  216030 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1204 23:58:52.442308  216030 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1204 23:58:52.442328  216030 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1204 23:58:52.442348  216030 cache.go:194] Successfully downloaded all kic artifacts
	I1204 23:58:52.442380  216030 start.go:360] acquireMachinesLock for old-k8s-version-066167: {Name:mk44188120fe7b51da9a5c75c3fca881cdcbfcb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:58:52.442446  216030 start.go:364] duration metric: took 48.098µs to acquireMachinesLock for "old-k8s-version-066167"
	I1204 23:58:52.442468  216030 start.go:96] Skipping create...Using existing machine configuration
	I1204 23:58:52.442473  216030 fix.go:54] fixHost starting: 
	I1204 23:58:52.442722  216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
	I1204 23:58:52.467466  216030 fix.go:112] recreateIfNeeded on old-k8s-version-066167: state=Stopped err=<nil>
	W1204 23:58:52.467493  216030 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 23:58:52.470419  216030 out.go:177] * Restarting existing docker container for "old-k8s-version-066167" ...
	I1204 23:58:52.473331  216030 cli_runner.go:164] Run: docker start old-k8s-version-066167
	I1204 23:58:52.820226  216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
	I1204 23:58:52.849840  216030 kic.go:430] container "old-k8s-version-066167" state is running.
	I1204 23:58:52.850253  216030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-066167
	I1204 23:58:52.882551  216030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/config.json ...
	I1204 23:58:52.882768  216030 machine.go:93] provisionDockerMachine start ...
	I1204 23:58:52.882824  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:58:52.907667  216030 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:52.908110  216030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1204 23:58:52.908127  216030 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 23:58:52.908875  216030 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1204 23:58:56.037068  216030 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-066167
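The "ssh: handshake failed: EOF" at 23:58:52.9 followed by a successful command at 23:58:56 is the provisioner retrying while sshd inside the freshly restarted container comes up; it is not part of the failure. A manual equivalent of that probe (port 33063 is the host port mapped to the node's 22/tcp in this run; key path taken from the sshutil line further down):

    until nc -z 127.0.0.1 33063; do sleep 1; done
    ssh -p 33063 \
      -i /home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa \
      docker@127.0.0.1 hostname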
	
	I1204 23:58:56.037126  216030 ubuntu.go:169] provisioning hostname "old-k8s-version-066167"
	I1204 23:58:56.037221  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:58:56.060011  216030 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:56.060265  216030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1204 23:58:56.060283  216030 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-066167 && echo "old-k8s-version-066167" | sudo tee /etc/hostname
	I1204 23:58:56.217306  216030 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-066167
	
	I1204 23:58:56.217389  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:58:56.254523  216030 main.go:141] libmachine: Using SSH client type: native
	I1204 23:58:56.254790  216030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1204 23:58:56.254808  216030 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-066167' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-066167/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-066167' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:58:56.405168  216030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
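The script above is idempotent: it only touches /etc/hosts when the hostname is missing, rewriting an existing 127.0.1.1 entry or appending one. To verify the result on the node (a sketch using docker exec against the node container):

    docker exec old-k8s-version-066167 grep old-k8s-version-066167 /etc/hosts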
	I1204 23:58:56.405196  216030 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20045-2283/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-2283/.minikube}
	I1204 23:58:56.405219  216030 ubuntu.go:177] setting up certificates
	I1204 23:58:56.405229  216030 provision.go:84] configureAuth start
	I1204 23:58:56.405296  216030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-066167
	I1204 23:58:56.424866  216030 provision.go:143] copyHostCerts
	I1204 23:58:56.424939  216030 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem, removing ...
	I1204 23:58:56.424952  216030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem
	I1204 23:58:56.425031  216030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem (1082 bytes)
	I1204 23:58:56.425175  216030 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem, removing ...
	I1204 23:58:56.425182  216030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem
	I1204 23:58:56.425212  216030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem (1123 bytes)
	I1204 23:58:56.425276  216030 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem, removing ...
	I1204 23:58:56.425281  216030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem
	I1204 23:58:56.425305  216030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem (1679 bytes)
	I1204 23:58:56.425361  216030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-066167 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-066167]
	I1204 23:58:57.214859  216030 provision.go:177] copyRemoteCerts
	I1204 23:58:57.214980  216030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:58:57.215054  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:58:57.234639  216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
	I1204 23:58:57.326684  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:58:57.353250  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1204 23:58:57.379765  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 23:58:57.406070  216030 provision.go:87] duration metric: took 1.000826036s to configureAuth
	I1204 23:58:57.406099  216030 ubuntu.go:193] setting minikube options for container-runtime
	I1204 23:58:57.406278  216030 config.go:182] Loaded profile config "old-k8s-version-066167": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1204 23:58:57.406292  216030 machine.go:96] duration metric: took 4.523516097s to provisionDockerMachine
	I1204 23:58:57.406300  216030 start.go:293] postStartSetup for "old-k8s-version-066167" (driver="docker")
	I1204 23:58:57.406311  216030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:58:57.406375  216030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:58:57.406422  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:58:57.430782  216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
	I1204 23:58:57.523014  216030 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:58:57.526685  216030 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1204 23:58:57.526723  216030 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1204 23:58:57.526733  216030 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1204 23:58:57.526741  216030 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1204 23:58:57.526754  216030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-2283/.minikube/addons for local assets ...
	I1204 23:58:57.526812  216030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-2283/.minikube/files for local assets ...
	I1204 23:58:57.526906  216030 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem -> 77362.pem in /etc/ssl/certs
	I1204 23:58:57.527017  216030 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 23:58:57.536555  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem --> /etc/ssl/certs/77362.pem (1708 bytes)
	I1204 23:58:57.562577  216030 start.go:296] duration metric: took 156.261441ms for postStartSetup
	I1204 23:58:57.562661  216030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:58:57.562712  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:58:57.580758  216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
	I1204 23:58:57.667417  216030 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1204 23:58:57.672629  216030 fix.go:56] duration metric: took 5.23014743s for fixHost
	I1204 23:58:57.672651  216030 start.go:83] releasing machines lock for "old-k8s-version-066167", held for 5.230196069s
	I1204 23:58:57.672722  216030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-066167
	I1204 23:58:57.691473  216030 ssh_runner.go:195] Run: cat /version.json
	I1204 23:58:57.691546  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:58:57.691794  216030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:58:57.691895  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:58:57.722536  216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
	I1204 23:58:57.731869  216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
	I1204 23:58:57.816677  216030 ssh_runner.go:195] Run: systemctl --version
	I1204 23:58:57.961593  216030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 23:58:57.966186  216030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1204 23:58:57.990433  216030 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1204 23:58:57.990516  216030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:58:57.999908  216030 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1204 23:58:57.999932  216030 start.go:495] detecting cgroup driver to use...
	I1204 23:58:57.999962  216030 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1204 23:58:58.000016  216030 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 23:58:58.015744  216030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 23:58:58.031207  216030 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:58:58.031273  216030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:58:58.046736  216030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:58:58.061583  216030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:58:58.177083  216030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:58:58.268488  216030 docker.go:233] disabling docker service ...
	I1204 23:58:58.268556  216030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:58:58.285059  216030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:58:58.297197  216030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:58:58.418211  216030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:58:58.540390  216030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:58:58.554118  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:58:58.571707  216030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1204 23:58:58.582154  216030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 23:58:58.592638  216030 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 23:58:58.592707  216030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 23:58:58.603254  216030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 23:58:58.613372  216030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 23:58:58.623832  216030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 23:58:58.634238  216030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:58:58.643764  216030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 23:58:58.654770  216030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:58:58.664172  216030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:58:58.673293  216030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:58.773943  216030 ssh_runner.go:195] Run: sudo systemctl restart containerd
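The sed edits above pin the sandbox image to registry.k8s.io/pause:3.2 (the pause image Kubernetes v1.20 expects), force SystemdCgroup = false to match the cgroupfs driver detected on the host, migrate any v1 runc runtime entries to io.containerd.runc.v2, and point the CNI conf_dir at /etc/cni/net.d before containerd is restarted. To confirm the rewritten settings on the node (a sketch):

    docker exec old-k8s-version-066167 \
      grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml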
	I1204 23:58:58.984351  216030 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1204 23:58:58.984465  216030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1204 23:58:58.988060  216030 start.go:563] Will wait 60s for crictl version
	I1204 23:58:58.988165  216030 ssh_runner.go:195] Run: which crictl
	I1204 23:58:58.991891  216030 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:58:59.057947  216030 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1204 23:58:59.058086  216030 ssh_runner.go:195] Run: containerd --version
	I1204 23:58:59.079788  216030 ssh_runner.go:195] Run: containerd --version
	I1204 23:58:59.105449  216030 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1204 23:58:59.107155  216030 cli_runner.go:164] Run: docker network inspect old-k8s-version-066167 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1204 23:58:59.120879  216030 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1204 23:58:59.124683  216030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:58:59.135009  216030 kubeadm.go:883] updating cluster {Name:old-k8s-version-066167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:58:59.135126  216030 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1204 23:58:59.135183  216030 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:58:59.190673  216030 containerd.go:627] all images are preloaded for containerd runtime.
	I1204 23:58:59.190693  216030 containerd.go:534] Images already preloaded, skipping extraction
	I1204 23:58:59.190752  216030 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:58:59.234335  216030 containerd.go:627] all images are preloaded for containerd runtime.
	I1204 23:58:59.234408  216030 cache_images.go:84] Images are preloaded, skipping loading
	I1204 23:58:59.234428  216030 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1204 23:58:59.234562  216030 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-066167 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:58:59.234647  216030 ssh_runner.go:195] Run: sudo crictl info
	I1204 23:58:59.280713  216030 cni.go:84] Creating CNI manager for ""
	I1204 23:58:59.280739  216030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1204 23:58:59.280750  216030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:58:59.280769  216030 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-066167 NodeName:old-k8s-version-066167 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1204 23:58:59.280909  216030 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-066167"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 23:58:59.280978  216030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1204 23:58:59.289856  216030 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:58:59.289968  216030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 23:58:59.298778  216030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1204 23:58:59.316060  216030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:58:59.333738  216030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
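The rendered kubeadm config shown above (2125 bytes) is staged as /var/tmp/minikube/kubeadm.yaml.new; later in this run it is diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. To inspect it on the node (a sketch):

    docker exec old-k8s-version-066167 cat /var/tmp/minikube/kubeadm.yaml.new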
	I1204 23:58:59.350993  216030 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1204 23:58:59.354415  216030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:58:59.364450  216030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:58:59.468283  216030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:58:59.482569  216030 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167 for IP: 192.168.76.2
	I1204 23:58:59.482640  216030 certs.go:194] generating shared ca certs ...
	I1204 23:58:59.482669  216030 certs.go:226] acquiring lock for ca certs: {Name:mk1d98569ca320b9ee7e00b709eb6c9a159130d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:58:59.482853  216030 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-2283/.minikube/ca.key
	I1204 23:58:59.482921  216030 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.key
	I1204 23:58:59.482942  216030 certs.go:256] generating profile certs ...
	I1204 23:58:59.483058  216030 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.key
	I1204 23:58:59.483142  216030 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/apiserver.key.e0d61a35
	I1204 23:58:59.483217  216030 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/proxy-client.key
	I1204 23:58:59.483379  216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736.pem (1338 bytes)
	W1204 23:58:59.483432  216030 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736_empty.pem, impossibly tiny 0 bytes
	I1204 23:58:59.483455  216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 23:58:59.483509  216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:58:59.483557  216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:58:59.483611  216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem (1679 bytes)
	I1204 23:58:59.483685  216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem (1708 bytes)
	I1204 23:58:59.484366  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:58:59.515037  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 23:58:59.555595  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:58:59.603752  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1204 23:58:59.687048  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 23:58:59.717677  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 23:58:59.743289  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:58:59.767997  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 23:58:59.793530  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem --> /usr/share/ca-certificates/77362.pem (1708 bytes)
	I1204 23:58:59.819454  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:58:59.845346  216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736.pem --> /usr/share/ca-certificates/7736.pem (1338 bytes)
	I1204 23:58:59.872268  216030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:58:59.891781  216030 ssh_runner.go:195] Run: openssl version
	I1204 23:58:59.897699  216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77362.pem && ln -fs /usr/share/ca-certificates/77362.pem /etc/ssl/certs/77362.pem"
	I1204 23:58:59.907707  216030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77362.pem
	I1204 23:58:59.911602  216030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:19 /usr/share/ca-certificates/77362.pem
	I1204 23:58:59.911715  216030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77362.pem
	I1204 23:58:59.918842  216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77362.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 23:58:59.928547  216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:58:59.938897  216030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:59.942511  216030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:11 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:59.942620  216030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:58:59.949635  216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:58:59.958884  216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7736.pem && ln -fs /usr/share/ca-certificates/7736.pem /etc/ssl/certs/7736.pem"
	I1204 23:58:59.968322  216030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7736.pem
	I1204 23:58:59.972166  216030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:19 /usr/share/ca-certificates/7736.pem
	I1204 23:58:59.972284  216030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7736.pem
	I1204 23:58:59.979460  216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7736.pem /etc/ssl/certs/51391683.0"
	I1204 23:58:59.988896  216030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:58:59.992673  216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 23:58:59.999736  216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 23:59:00.006994  216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 23:59:00.014914  216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 23:59:00.023473  216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 23:59:00.032184  216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
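Each of the openssl checks above exits 0 only if the certificate remains valid for at least another 86400 seconds (24 hours), so all control-plane certs passed here. The same check can be run by hand for any cert on the node (a sketch):

    docker exec old-k8s-version-066167 \
      openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
      && echo ok || echo expiring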
	I1204 23:59:00.040698  216030 kubeadm.go:392] StartCluster: {Name:old-k8s-version-066167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:59:00.040874  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1204 23:59:00.040990  216030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:59:00.143884  216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
	I1204 23:59:00.144254  216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
	I1204 23:59:00.144283  216030 cri.go:89] found id: "784529bd212fc0a79c877ec4e2c6446e0ea31c9805d13332863fc4f0e39cf480"
	I1204 23:59:00.144322  216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
	I1204 23:59:00.144339  216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
	I1204 23:59:00.144361  216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
	I1204 23:59:00.144381  216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
	I1204 23:59:00.144408  216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
	I1204 23:59:00.144434  216030 cri.go:89] found id: ""
	I1204 23:59:00.144538  216030 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1204 23:59:00.164485  216030 cri.go:116] JSON = null
	W1204 23:59:00.164595  216030 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
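
The warning at kubeadm.go:399 comes from cross-checking two views of the runtime: crictl ps found 8 kube-system containers, while runc list -f json printed the literal null, i.e. zero tracked (and hence zero paused) containers, so the unpause pass is skipped. A rough Go sketch of that reconciliation, assuming only what the log shows about runc's JSON output; pausedIDs is an illustrative name:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // pausedIDs runs `runc --root <root> list -f json` and returns the IDs of
    // paused containers. runc prints "null" when it tracks no containers,
    // which json.Unmarshal decodes into a nil slice.
    func pausedIDs(root string) ([]string, error) {
        out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
        if err != nil {
            return nil, err
        }
        var entries []struct {
            ID     string `json:"id"`
            Status string `json:"status"`
        }
        if err := json.Unmarshal(out, &entries); err != nil {
            return nil, err
        }
        var ids []string
        for _, e := range entries {
            if e.Status == "paused" {
                ids = append(ids, e.ID)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := pausedIDs("/run/containerd/runc/k8s.io")
        fmt.Println(len(ids), err) // 0 paused here vs. 8 from crictl ps => warning
    }
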
	I1204 23:59:00.164729  216030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:59:00.178310  216030 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 23:59:00.178397  216030 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 23:59:00.178489  216030 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 23:59:00.191190  216030 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 23:59:00.191806  216030 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-066167" does not appear in /home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1204 23:59:00.192040  216030 kubeconfig.go:62] /home/jenkins/minikube-integration/20045-2283/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-066167" cluster setting kubeconfig missing "old-k8s-version-066167" context setting]
	I1204 23:59:00.192450  216030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-2283/kubeconfig: {Name:mka3b7dd57c7b1524b8db81fd47d2a503644c81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
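
The two kubeconfig.go lines above describe the repair step: the profile's cluster and context entries are missing from the kubeconfig, so they are added under a file lock and written back. A minimal sketch of such a repair with k8s.io/client-go/tools/clientcmd (locking omitted; repairKubeconfig and its parameters are mine, not minikube's):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds missing cluster/context entries for a profile,
    // e.g. name "old-k8s-version-066167", server "https://192.168.76.2:8443".
    func repairKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            cluster := api.NewCluster()
            cluster.Server = server
            cfg.Clusters[name] = cluster
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := api.NewContext()
            ctx.Cluster = name
            ctx.AuthInfo = name
            cfg.Contexts[name] = ctx
        }
        return clientcmd.WriteToFile(*cfg, path)
    }
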
	I1204 23:59:00.194567  216030 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 23:59:00.206583  216030 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1204 23:59:00.206671  216030 kubeadm.go:597] duration metric: took 28.252334ms to restartPrimaryControlPlane
	I1204 23:59:00.206700  216030 kubeadm.go:394] duration metric: took 166.012363ms to StartCluster
	I1204 23:59:00.206746  216030 settings.go:142] acquiring lock: {Name:mkf88c0c5090e30b7bb8c2e4a8e4f7c9dd68316c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:59:00.206971  216030 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1204 23:59:00.207718  216030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-2283/kubeconfig: {Name:mka3b7dd57c7b1524b8db81fd47d2a503644c81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:59:00.208137  216030 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1204 23:59:00.208652  216030 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 23:59:00.208774  216030 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-066167"
	I1204 23:59:00.208798  216030 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-066167"
	W1204 23:59:00.208812  216030 addons.go:243] addon storage-provisioner should already be in state true
	I1204 23:59:00.208841  216030 host.go:66] Checking if "old-k8s-version-066167" exists ...
	I1204 23:59:00.209026  216030 config.go:182] Loaded profile config "old-k8s-version-066167": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1204 23:59:00.209169  216030 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-066167"
	I1204 23:59:00.209214  216030 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-066167"
	I1204 23:59:00.209377  216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
	I1204 23:59:00.209578  216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
	I1204 23:59:00.217218  216030 addons.go:69] Setting dashboard=true in profile "old-k8s-version-066167"
	I1204 23:59:00.217258  216030 addons.go:234] Setting addon dashboard=true in "old-k8s-version-066167"
	W1204 23:59:00.217267  216030 addons.go:243] addon dashboard should already be in state true
	I1204 23:59:00.217305  216030 host.go:66] Checking if "old-k8s-version-066167" exists ...
	I1204 23:59:00.217806  216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
	I1204 23:59:00.218031  216030 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-066167"
	I1204 23:59:00.218066  216030 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-066167"
	W1204 23:59:00.218105  216030 addons.go:243] addon metrics-server should already be in state true
	I1204 23:59:00.218164  216030 host.go:66] Checking if "old-k8s-version-066167" exists ...
	I1204 23:59:00.218695  216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
	I1204 23:59:00.225241  216030 out.go:177] * Verifying Kubernetes components...
	I1204 23:59:00.226609  216030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:59:00.277895  216030 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1204 23:59:00.279274  216030 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1204 23:59:00.281076  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1204 23:59:00.283039  216030 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1204 23:59:00.283150  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:59:00.302109  216030 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:59:00.304710  216030 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-066167"
	W1204 23:59:00.304744  216030 addons.go:243] addon default-storageclass should already be in state true
	I1204 23:59:00.304773  216030 host.go:66] Checking if "old-k8s-version-066167" exists ...
	I1204 23:59:00.305336  216030 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:59:00.305357  216030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:59:00.305433  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:59:00.309401  216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
	I1204 23:59:00.323428  216030 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1204 23:59:00.326473  216030 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 23:59:00.326509  216030 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 23:59:00.326640  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:59:00.377541  216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
	I1204 23:59:00.394965  216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
	I1204 23:59:00.404504  216030 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:59:00.404526  216030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:59:00.404603  216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
	I1204 23:59:00.415685  216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
	I1204 23:59:00.435901  216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
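
Each sshutil line opens a fresh SSH client to the container's forwarded port 33063 using the profile's id_rsa key; the scp and kubectl steps that follow run over these sessions. A bare-bones golang.org/x/crypto/ssh sketch of such a connection (a test-rig illustration, not minikube's sshutil package):

    package main

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH dials addr (e.g. "127.0.0.1:33063") as user "docker" with a
    // private key and returns the combined output of one remote command.
    func runOverSSH(addr, keyPath, cmd string) ([]byte, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only on a throwaway test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return nil, err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return nil, err
        }
        defer sess.Close()
        return sess.CombinedOutput(cmd)
    }
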
	I1204 23:59:00.526674  216030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:59:00.571389  216030 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-066167" to be "Ready" ...
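
From here node_ready.go polls the Node object until its Ready condition is True, tolerating the connection-refused and TLS-timeout errors seen below while the apiserver restarts. A sketch of that style of wait with client-go (clientset construction omitted; the interval and helper name are assumptions):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the named node reports NodeReady=True.
    // Transient API errors are swallowed so the poll keeps going, matching
    // the repeated "error getting node" lines in this log.
    func waitNodeReady(cs kubernetes.Interface, name string) error {
        return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // connection refused / TLS timeout: retry
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
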
	I1204 23:59:00.593433  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1204 23:59:00.593454  216030 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1204 23:59:00.627958  216030 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 23:59:00.628029  216030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1204 23:59:00.652051  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1204 23:59:00.652123  216030 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1204 23:59:00.672452  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:59:00.679452  216030 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 23:59:00.679521  216030 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 23:59:00.695610  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:59:00.723528  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1204 23:59:00.723600  216030 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1204 23:59:00.751402  216030 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:59:00.751476  216030 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 23:59:00.799496  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1204 23:59:00.799569  216030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1204 23:59:00.860187  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:59:00.921012  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1204 23:59:00.921086  216030 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1204 23:59:01.024288  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.024379  216030 retry.go:31] will retry after 312.184752ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:01.034387  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.034488  216030 retry.go:31] will retry after 322.762797ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.040090  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1204 23:59:01.040163  216030 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1204 23:59:01.093721  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.093757  216030 retry.go:31] will retry after 244.927607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
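
Every kubectl apply in this stretch fails with "connection refused" because the apiserver is still coming up, and retry.go re-runs each one after a randomized, slowly growing delay (312ms, 322ms, 244ms above). A generic sketch of that jittered retry loop, assuming nothing about minikube's actual retry.go beyond what the logged delays suggest:

    package main

    import (
        "math/rand"
        "time"
    )

    // retryWithJitter re-runs fn until it succeeds or maxTime elapses,
    // sleeping a randomized, gently growing delay between attempts.
    func retryWithJitter(fn func() error, maxTime time.Duration) error {
        deadline := time.Now().Add(maxTime)
        base := 250 * time.Millisecond
        var err error
        for time.Now().Before(deadline) {
            if err = fn(); err == nil {
                return nil
            }
            // pick a delay in [0.5, 1.5) * base, then grow base by 25%
            time.Sleep(time.Duration(float64(base) * (0.5 + rand.Float64())))
            base += base / 4
        }
        return err
    }
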
	I1204 23:59:01.095596  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1204 23:59:01.095619  216030 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1204 23:59:01.128097  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1204 23:59:01.128173  216030 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1204 23:59:01.151401  216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1204 23:59:01.151431  216030 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1204 23:59:01.181872  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1204 23:59:01.337564  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:59:01.339384  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:59:01.358024  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1204 23:59:01.400453  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.400483  216030 retry.go:31] will retry after 178.135322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.579375  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1204 23:59:01.913184  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.913224  216030 retry.go:31] will retry after 518.189037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:01.946000  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:01.946032  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.946053  216030 retry.go:31] will retry after 315.867414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.946061  216030 retry.go:31] will retry after 219.565848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:01.988491  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:01.988520  216030 retry.go:31] will retry after 309.910603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:02.166270  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:59:02.263109  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:59:02.298783  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1204 23:59:02.358215  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:02.358257  216030 retry.go:31] will retry after 361.560544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:02.432499  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:59:02.571825  216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": dial tcp 192.168.76.2:8443: connect: connection refused
	I1204 23:59:02.720033  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1204 23:59:02.735767  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:02.735801  216030 retry.go:31] will retry after 418.341804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:02.812590  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:02.812626  216030 retry.go:31] will retry after 488.130366ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:02.828249  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:02.828285  216030 retry.go:31] will retry after 310.105415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:02.880346  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:02.880374  216030 retry.go:31] will retry after 770.762768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:03.139297  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:59:03.154559  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:59:03.300886  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1204 23:59:03.307845  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:03.307879  216030 retry.go:31] will retry after 800.137456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:03.373919  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:03.373952  216030 retry.go:31] will retry after 1.120090819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:03.441590  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:03.441626  216030 retry.go:31] will retry after 625.533972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:03.651608  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1204 23:59:03.811647  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:03.811681  216030 retry.go:31] will retry after 943.564938ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:04.068147  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1204 23:59:04.108975  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1204 23:59:04.173123  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:04.173162  216030 retry.go:31] will retry after 1.270498363s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:04.243218  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:04.243255  216030 retry.go:31] will retry after 1.522887692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:04.494730  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:59:04.572350  216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": dial tcp 192.168.76.2:8443: connect: connection refused
	W1204 23:59:04.597940  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:04.598023  216030 retry.go:31] will retry after 1.26879485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:04.756242  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1204 23:59:04.877238  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:04.877264  216030 retry.go:31] will retry after 2.106404771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:05.444487  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1204 23:59:05.544370  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:05.544407  216030 retry.go:31] will retry after 2.39631732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:05.767291  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:59:05.867090  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1204 23:59:05.867237  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:05.867265  216030 retry.go:31] will retry after 1.509553348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1204 23:59:05.975838  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:05.975880  216030 retry.go:31] will retry after 2.49774844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:06.572434  216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": dial tcp 192.168.76.2:8443: connect: connection refused
	I1204 23:59:06.983846  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1204 23:59:07.082591  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:07.082621  216030 retry.go:31] will retry after 1.712553314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:07.377466  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1204 23:59:07.479578  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:07.479613  216030 retry.go:31] will retry after 2.677258788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:07.941852  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1204 23:59:08.042298  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:08.042337  216030 retry.go:31] will retry after 2.646781732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:08.474286  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1204 23:59:08.583037  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:08.583067  216030 retry.go:31] will retry after 1.540189467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:08.795392  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1204 23:59:08.986708  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:08.986740  216030 retry.go:31] will retry after 2.631574868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1204 23:59:09.072402  216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": dial tcp 192.168.76.2:8443: connect: connection refused
	I1204 23:59:10.124154  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:59:10.157411  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:59:10.689511  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1204 23:59:11.619306  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:59:20.572791  216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": net/http: TLS handshake timeout
	I1204 23:59:20.902135  216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.777935896s)
	W1204 23:59:20.902177  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1204 23:59:20.902197  216030 retry.go:31] will retry after 4.180913506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1204 23:59:20.905945  216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.748501717s)
	W1204 23:59:20.905977  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1204 23:59:20.905992  216030 retry.go:31] will retry after 3.572493709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1204 23:59:21.161741  216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.472183353s)
	W1204 23:59:21.161778  216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1204 23:59:21.161797  216030 retry.go:31] will retry after 5.277949957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
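
The three TLS-handshake timeouts above share one cause: the docker container was just restarted and the apiserver inside it is still standing up, so each addon apply fails and is rescheduled with a randomized backoff (2.63s, 4.18s, 3.57s, 5.28s in this run). Below is a minimal Go sketch of that retry shape; it is illustrative only, not minikube's actual retry.go, and applyWithRetry and the doubling-plus-jitter policy are assumptions for the example.

// Illustrative sketch of the apply-with-retry pattern visible in the log
// above. Not minikube's code; applyWithRetry and the jitter policy are
// hypothetical, but the command shape mirrors the logged invocations.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f <manifest>` (via sudo with
// KUBECONFIG set, as in the log) and retries with a jittered, growing delay
// while the apiserver is still coming back up.
func applyWithRetry(kubectl, manifest string, attempts int) error {
	var err error
	delay := 2 * time.Second
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
		// Randomize the wait so parallel appliers (storageclass, metrics-server,
		// dashboard above) don't hammer the recovering apiserver in lockstep.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return err
}

func main() {
	if err := applyWithRetry("/var/lib/minikube/binaries/v1.20.0/kubectl",
		"/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
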
	I1204 23:59:21.952838  216030 node_ready.go:49] node "old-k8s-version-066167" has status "Ready":"True"
	I1204 23:59:21.952862  216030 node_ready.go:38] duration metric: took 21.381445101s for node "old-k8s-version-066167" to be "Ready" ...
	I1204 23:59:21.952873  216030 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:59:22.213039  216030 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-vb8kf" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:22.567414  216030 pod_ready.go:93] pod "coredns-74ff55c5b-vb8kf" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:22.567489  216030 pod_ready.go:82] duration metric: took 354.337475ms for pod "coredns-74ff55c5b-vb8kf" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:22.567517  216030 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:22.610199  216030 pod_ready.go:93] pod "etcd-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:22.610268  216030 pod_ready.go:82] duration metric: took 42.731173ms for pod "etcd-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:22.610309  216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:23.066631  216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.447288087s)
	I1204 23:59:24.479125  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:59:24.626687  216030 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:25.083957  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:59:26.150754  216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.671589784s)
	I1204 23:59:26.333320  216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.249315574s)
	I1204 23:59:26.333419  216030 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-066167"
	I1204 23:59:26.440433  216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1204 23:59:26.925999  216030 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-066167 addons enable metrics-server
	
	I1204 23:59:26.928483  216030 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1204 23:59:26.930938  216030 addons.go:510] duration metric: took 26.722286953s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1204 23:59:27.116187  216030 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:29.117090  216030 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:31.136563  216030 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:32.116916  216030 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
	I1204 23:59:32.116941  216030 pod_ready.go:82] duration metric: took 9.506606364s for pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:32.116955  216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1204 23:59:34.123385  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:36.124414  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:38.622468  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:41.134741  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:43.628217  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:46.129268  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:48.623824  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:51.129971  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:53.622941  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:55.623333  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1204 23:59:58.123609  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:00.155979  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:02.624179  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:05.124373  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:07.625365  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:10.124525  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:12.623285  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:15.124225  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:17.124364  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:19.124471  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:21.124677  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:23.624178  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:26.123881  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:28.141044  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:30.641472  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:33.124970  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:35.125567  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:37.125828  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:39.622706  216030 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:39.622731  216030 pod_ready.go:82] duration metric: took 1m7.50576737s for pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:39.622744  216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xh97b" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:39.627598  216030 pod_ready.go:93] pod "kube-proxy-xh97b" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:39.627663  216030 pod_ready.go:82] duration metric: took 4.909057ms for pod "kube-proxy-xh97b" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:39.627682  216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:41.635075  216030 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:44.133262  216030 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:45.634685  216030 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:45.634754  216030 pod_ready.go:82] duration metric: took 6.007062956s for pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:45.634781  216030 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:47.641160  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:50.142040  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:52.640397  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:54.641624  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:57.141636  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:59.640966  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:01.641368  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:04.141819  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:06.641245  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:08.643778  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:11.142085  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:13.142210  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:15.143248  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:17.640366  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:19.642863  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:22.141401  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:24.141731  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:26.640254  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:29.141453  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:31.640747  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:33.640815  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:35.640860  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:37.641357  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:40.141576  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:42.142551  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:44.640790  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:46.640978  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:49.140930  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:51.640575  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:54.141681  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:56.640948  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:58.641251  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:01.140947  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:03.141906  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:05.641253  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:08.140771  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:10.141503  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:12.141977  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:14.640789  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:16.640823  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:18.641073  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:20.641191  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:22.641262  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:25.142092  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:27.142352  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:29.640668  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:32.144704  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:34.641267  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:37.141776  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:39.640880  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:42.143365  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:44.641367  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:47.141280  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:49.141788  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:51.141822  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:53.186734  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:55.641386  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:58.141377  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:00.190535  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:02.640575  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:04.641218  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:07.141448  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:09.142518  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:11.646299  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:14.140064  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:16.141711  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:18.640395  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:20.641321  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:23.141469  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:25.142104  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:27.641721  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:30.141599  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:32.640894  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:34.641205  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:37.141279  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:39.141499  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:41.141843  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:43.142457  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:45.642402  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:48.141074  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:50.640840  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:52.640941  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:55.142516  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:57.641042  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:00.258272  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:02.640707  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:04.640786  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:06.640980  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:08.641054  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:11.146089  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:13.640923  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:16.141477  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:18.641364  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:21.154913  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:23.640479  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:25.641079  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:27.642694  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:30.141328  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:32.142061  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:34.646681  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:37.141273  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:39.142582  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:41.641272  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:44.154672  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:45.641935  216030 pod_ready.go:82] duration metric: took 4m0.007127886s for pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace to be "Ready" ...
	E1205 00:04:45.641961  216030 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 00:04:45.641970  216030 pod_ready.go:39] duration metric: took 5m23.689087349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
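
The deadline above is what decides the test: every other system pod went Ready, but metrics-server-9975d5f86-ksvdj never can, because the addon image is pinned to fake.domain/registry.k8s.io/echoserver:1.4 (see the kubelet ErrImagePull/ImagePullBackOff entries further down), so the 4m0s extra wait expires with "context deadline exceeded". A minimal client-go sketch of this kind of readiness poll follows; it assumes current client-go/apimachinery APIs and a hardcoded pod name, and is not the test's actual pod_ready.go logic.

// Minimal sketch of a "wait for pod Ready" loop like the one timing out
// above. Assumptions: kubeconfig path and pod name are placeholders taken
// from the log; intervals and the 6m0s cap mirror the logged waits.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s until the pod reports Ready=True or the deadline hits.
	// A pod stuck in ImagePullBackOff stays NotReady forever, so this
	// returns a context-deadline error, exactly as seen in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx,
				"metrics-server-9975d5f86-ksvdj", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}
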
	I1205 00:04:45.641984  216030 api_server.go:52] waiting for apiserver process to appear ...
	I1205 00:04:45.642014  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 00:04:45.642080  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 00:04:45.701396  216030 cri.go:89] found id: "d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
	I1205 00:04:45.701417  216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
	I1205 00:04:45.701422  216030 cri.go:89] found id: ""
	I1205 00:04:45.701428  216030 logs.go:282] 2 containers: [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8]
	I1205 00:04:45.701487  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.706274  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.709870  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 00:04:45.709950  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 00:04:45.752726  216030 cri.go:89] found id: "d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
	I1205 00:04:45.752759  216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
	I1205 00:04:45.752764  216030 cri.go:89] found id: ""
	I1205 00:04:45.752771  216030 logs.go:282] 2 containers: [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e]
	I1205 00:04:45.752844  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.756595  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.759984  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 00:04:45.760054  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 00:04:45.802699  216030 cri.go:89] found id: "18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
	I1205 00:04:45.802722  216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
	I1205 00:04:45.802733  216030 cri.go:89] found id: ""
	I1205 00:04:45.802741  216030 logs.go:282] 2 containers: [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c]
	I1205 00:04:45.802798  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.806565  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.810357  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 00:04:45.810434  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 00:04:45.853797  216030 cri.go:89] found id: "4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
	I1205 00:04:45.853818  216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
	I1205 00:04:45.853823  216030 cri.go:89] found id: ""
	I1205 00:04:45.853832  216030 logs.go:282] 2 containers: [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0]
	I1205 00:04:45.853889  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.857263  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.862164  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 00:04:45.862243  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 00:04:45.902320  216030 cri.go:89] found id: "355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
	I1205 00:04:45.902409  216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
	I1205 00:04:45.902423  216030 cri.go:89] found id: ""
	I1205 00:04:45.902431  216030 logs.go:282] 2 containers: [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88]
	I1205 00:04:45.902501  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.906129  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.909489  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 00:04:45.909590  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 00:04:45.951353  216030 cri.go:89] found id: "0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
	I1205 00:04:45.951376  216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
	I1205 00:04:45.951381  216030 cri.go:89] found id: ""
	I1205 00:04:45.951388  216030 logs.go:282] 2 containers: [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15]
	I1205 00:04:45.951449  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.955123  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.958548  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 00:04:45.958621  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 00:04:46.013456  216030 cri.go:89] found id: "9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
	I1205 00:04:46.013484  216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
	I1205 00:04:46.013489  216030 cri.go:89] found id: ""
	I1205 00:04:46.013497  216030 logs.go:282] 2 containers: [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d]
	I1205 00:04:46.013620  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:46.018166  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:46.022058  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 00:04:46.022188  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 00:04:46.071154  216030 cri.go:89] found id: "eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
	I1205 00:04:46.071186  216030 cri.go:89] found id: ""
	I1205 00:04:46.071195  216030 logs.go:282] 1 containers: [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e]
	I1205 00:04:46.071278  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:46.075279  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 00:04:46.075401  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 00:04:46.115487  216030 cri.go:89] found id: "61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
	I1205 00:04:46.115560  216030 cri.go:89] found id: "cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
	I1205 00:04:46.115580  216030 cri.go:89] found id: ""
	I1205 00:04:46.115593  216030 logs.go:282] 2 containers: [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf]
	I1205 00:04:46.115669  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:46.119363  216030 ssh_runner.go:195] Run: which crictl
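
With the wait exhausted, the run switches to diagnostics: for each control-plane component it resolves crictl, lists matching container IDs (`sudo crictl ps -a --quiet --name=<component>`), and then tails the last 400 lines of each container's logs, as the entries below show. A compact Go sketch of that enumeration loop, illustrative only; the component list and tail length simply mirror the commands in this log.

// Illustrative sketch of the diagnostic pass above: list container IDs for
// a component with crictl, then tail each container's logs. Error handling
// is trimmed for brevity.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one 64-hex container ID per line
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(comp)
		if err != nil {
			fmt.Println(comp, "list failed:", err)
			continue
		}
		for _, id := range ids {
			// Same shape as the log-gathering commands in the output below.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s %s ==\n%s\n", comp, id[:12], logs) // IDs are 64 hex chars
		}
	}
}
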
	I1205 00:04:46.122924  216030 logs.go:123] Gathering logs for coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] ...
	I1205 00:04:46.122956  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
	I1205 00:04:46.164473  216030 logs.go:123] Gathering logs for storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] ...
	I1205 00:04:46.164503  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
	I1205 00:04:46.219238  216030 logs.go:123] Gathering logs for describe nodes ...
	I1205 00:04:46.219270  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 00:04:46.367441  216030 logs.go:123] Gathering logs for kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] ...
	I1205 00:04:46.367470  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
	I1205 00:04:46.406779  216030 logs.go:123] Gathering logs for kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] ...
	I1205 00:04:46.406805  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
	I1205 00:04:46.454765  216030 logs.go:123] Gathering logs for kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] ...
	I1205 00:04:46.454792  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
	I1205 00:04:46.498510  216030 logs.go:123] Gathering logs for kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] ...
	I1205 00:04:46.498538  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
	I1205 00:04:46.537447  216030 logs.go:123] Gathering logs for containerd ...
	I1205 00:04:46.537476  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 00:04:46.617148  216030 logs.go:123] Gathering logs for etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] ...
	I1205 00:04:46.617196  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
	I1205 00:04:46.667834  216030 logs.go:123] Gathering logs for kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] ...
	I1205 00:04:46.667985  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
	I1205 00:04:46.732274  216030 logs.go:123] Gathering logs for kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] ...
	I1205 00:04:46.732303  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
	I1205 00:04:46.792624  216030 logs.go:123] Gathering logs for kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] ...
	I1205 00:04:46.792656  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
	I1205 00:04:46.830707  216030 logs.go:123] Gathering logs for storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] ...
	I1205 00:04:46.830736  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
	I1205 00:04:46.875737  216030 logs.go:123] Gathering logs for kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] ...
	I1205 00:04:46.875769  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
	I1205 00:04:46.960343  216030 logs.go:123] Gathering logs for dmesg ...
	I1205 00:04:46.960376  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 00:04:46.978879  216030 logs.go:123] Gathering logs for kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] ...
	I1205 00:04:46.978908  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
	I1205 00:04:47.043184  216030 logs.go:123] Gathering logs for etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] ...
	I1205 00:04:47.043220  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
	I1205 00:04:47.095108  216030 logs.go:123] Gathering logs for coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] ...
	I1205 00:04:47.095137  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
	I1205 00:04:47.138073  216030 logs.go:123] Gathering logs for kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] ...
	I1205 00:04:47.138112  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
	I1205 00:04:47.200917  216030 logs.go:123] Gathering logs for kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] ...
	I1205 00:04:47.200959  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
	I1205 00:04:47.290017  216030 logs.go:123] Gathering logs for container status ...
	I1205 00:04:47.290077  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 00:04:47.355835  216030 logs.go:123] Gathering logs for kubelet ...
	I1205 00:04:47.355861  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 00:04:47.415957  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029161     664 reflector.go:138] object-"kube-system"/"kindnet-token-rrxv8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rrxv8" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:47.416229  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029640     664 reflector.go:138] object-"default"/"default-token-6q5g5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6q5g5" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:47.416462  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029889     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7b2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7b2f" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:47.422607  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:23 old-k8s-version-066167 kubelet[664]: E1204 23:59:23.455493     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.422894  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:24 old-k8s-version-066167 kubelet[664]: E1204 23:59:24.408536     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.425873  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:36 old-k8s-version-066167 kubelet[664]: E1204 23:59:36.194820     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.427977  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:46 old-k8s-version-066167 kubelet[664]: E1204 23:59:46.769553     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.428166  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.173711     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.428495  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.774292     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.429164  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.679719     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.429606  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.812800     664 pod_workers.go:191] Error syncing pod 81fe575b-ab3c-49a1-b013-84ec8c0bea1c ("storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"
	W1205 00:04:47.432365  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:00 old-k8s-version-066167 kubelet[664]: E1205 00:00:00.315343     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.432950  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:08 old-k8s-version-066167 kubelet[664]: E1205 00:00:08.854222     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.433315  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:11 old-k8s-version-066167 kubelet[664]: E1205 00:00:11.169368     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.433645  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:15 old-k8s-version-066167 kubelet[664]: E1205 00:00:15.678257     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.433831  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:26 old-k8s-version-066167 kubelet[664]: E1205 00:00:26.169392     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.434156  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:27 old-k8s-version-066167 kubelet[664]: E1205 00:00:27.168949     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.434742  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:39 old-k8s-version-066167 kubelet[664]: E1205 00:00:39.964267     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.437168  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:41 old-k8s-version-066167 kubelet[664]: E1205 00:00:41.177237     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.437499  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:45 old-k8s-version-066167 kubelet[664]: E1205 00:00:45.677813     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.437686  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:54 old-k8s-version-066167 kubelet[664]: E1205 00:00:54.170310     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.438017  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:57 old-k8s-version-066167 kubelet[664]: E1205 00:00:57.168714     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.438200  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:07 old-k8s-version-066167 kubelet[664]: E1205 00:01:07.169610     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.438538  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:08 old-k8s-version-066167 kubelet[664]: E1205 00:01:08.169137     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.439120  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.080372     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.439303  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.172882     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.439631  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:25 old-k8s-version-066167 kubelet[664]: E1205 00:01:25.677810     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.439814  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:36 old-k8s-version-066167 kubelet[664]: E1205 00:01:36.169551     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.440143  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:39 old-k8s-version-066167 kubelet[664]: E1205 00:01:39.170525     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.440328  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:50 old-k8s-version-066167 kubelet[664]: E1205 00:01:50.170399     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.440657  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:52 old-k8s-version-066167 kubelet[664]: E1205 00:01:52.168832     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.441030  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:03 old-k8s-version-066167 kubelet[664]: E1205 00:02:03.168796     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.443463  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:04 old-k8s-version-066167 kubelet[664]: E1205 00:02:04.179849     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.443782  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.169877     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.443980  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.170577     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.444164  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:27 old-k8s-version-066167 kubelet[664]: E1205 00:02:27.169252     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.444490  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:30 old-k8s-version-066167 kubelet[664]: E1205 00:02:30.169307     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.444674  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:40 old-k8s-version-066167 kubelet[664]: E1205 00:02:40.172577     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.445266  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:43 old-k8s-version-066167 kubelet[664]: E1205 00:02:43.346410     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.445596  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:45 old-k8s-version-066167 kubelet[664]: E1205 00:02:45.677951     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.445781  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:53 old-k8s-version-066167 kubelet[664]: E1205 00:02:53.169354     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.446106  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:58 old-k8s-version-066167 kubelet[664]: E1205 00:02:58.169560     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.446289  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:08 old-k8s-version-066167 kubelet[664]: E1205 00:03:08.172186     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.446622  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:12 old-k8s-version-066167 kubelet[664]: E1205 00:03:12.169424     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.446806  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:21 old-k8s-version-066167 kubelet[664]: E1205 00:03:21.169226     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.447136  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:23 old-k8s-version-066167 kubelet[664]: E1205 00:03:23.168967     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.447319  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:34 old-k8s-version-066167 kubelet[664]: E1205 00:03:34.173087     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.447646  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.447972  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.448154  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.448468  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.448666  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.448992  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.449185  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.449511  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.449719  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.450052  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.450238  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
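	(The sweep above surfaces only two recurring failures: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because fake.domain never resolves, and dashboard-metrics-scraper sits in CrashLoopBackOff. The DNS half reproduces outside the cluster; a minimal sketch — the hostname comes straight from the log, everything else is illustrative:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain is the registry host the metrics-server image points at in
		// this profile; the lookup fails the same way containerd's pull does above.
		addrs, err := net.LookupHost("fake.domain")
		if err != nil {
			fmt.Println("lookup failed as expected:", err) // "no such host"
			return
		}
		fmt.Println("unexpectedly resolved:", addrs)
	})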
	I1205 00:04:47.450255  216030 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:47.450266  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 00:04:47.450326  216030 out.go:270] X Problems detected in kubelet:
	W1205 00:04:47.450338  216030 out.go:270]   Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.450346  216030 out.go:270]   Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.450353  216030 out.go:270]   Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.450362  216030 out.go:270]   Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.450370  216030 out.go:270]   Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
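	(The scraper's restart delays in the entries above grow 10s → 20s → 40s → 1m20s → 2m40s, which matches kubelet's crash backoff: the delay doubles per restart and, in stock kubelet, is capped at 5 minutes. A sketch reproducing the sequence — the 10s base and 5m cap are assumed kubelet defaults, not values taken from this log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Kubelet-style crash backoff: start at 10s, double per restart, cap at 5m.
		backoff, limit := 10*time.Second, 5*time.Minute
		for i := 0; i < 7; i++ {
			fmt.Println(backoff) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
			backoff *= 2
			if backoff > limit {
				backoff = limit
			}
		}
	})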
	I1205 00:04:47.450378  216030 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:47.450384  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:04:57.451637  216030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 00:04:57.463556  216030 api_server.go:72] duration metric: took 5m57.25534682s to wait for apiserver process to appear ...
	I1205 00:04:57.463582  216030 api_server.go:88] waiting for apiserver healthz status ...
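	(The healthz wait announced here is, in effect, an HTTPS GET against the apiserver's /healthz endpoint until it returns "ok". A rough equivalent — the host:port below is a placeholder, not taken from this log, and TLS verification is skipped the way a throwaway probe can afford to:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Placeholder endpoint; the real check derives the apiserver URL from the profile.
		url := "https://192.168.76.2:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // retry until healthz reports ok
		}
	})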
	I1205 00:04:57.463617  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 00:04:57.463679  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 00:04:57.502613  216030 cri.go:89] found id: "d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
	I1205 00:04:57.502634  216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
	I1205 00:04:57.502639  216030 cri.go:89] found id: ""
	I1205 00:04:57.502646  216030 logs.go:282] 2 containers: [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8]
	I1205 00:04:57.502706  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.506578  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.510329  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 00:04:57.510403  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 00:04:57.549412  216030 cri.go:89] found id: "d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
	I1205 00:04:57.549434  216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
	I1205 00:04:57.549439  216030 cri.go:89] found id: ""
	I1205 00:04:57.549446  216030 logs.go:282] 2 containers: [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e]
	I1205 00:04:57.549522  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.553176  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.556561  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 00:04:57.556630  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 00:04:57.606322  216030 cri.go:89] found id: "18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
	I1205 00:04:57.606344  216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
	I1205 00:04:57.606349  216030 cri.go:89] found id: ""
	I1205 00:04:57.606356  216030 logs.go:282] 2 containers: [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c]
	I1205 00:04:57.606414  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.610546  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.614234  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 00:04:57.614302  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 00:04:57.657522  216030 cri.go:89] found id: "4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
	I1205 00:04:57.657543  216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
	I1205 00:04:57.657549  216030 cri.go:89] found id: ""
	I1205 00:04:57.657556  216030 logs.go:282] 2 containers: [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0]
	I1205 00:04:57.657619  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.661379  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.664752  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 00:04:57.664830  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 00:04:57.712770  216030 cri.go:89] found id: "355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
	I1205 00:04:57.712861  216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
	I1205 00:04:57.712880  216030 cri.go:89] found id: ""
	I1205 00:04:57.712898  216030 logs.go:282] 2 containers: [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88]
	I1205 00:04:57.712996  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.717580  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.721738  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 00:04:57.721819  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 00:04:57.759280  216030 cri.go:89] found id: "0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
	I1205 00:04:57.759302  216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
	I1205 00:04:57.759307  216030 cri.go:89] found id: ""
	I1205 00:04:57.759314  216030 logs.go:282] 2 containers: [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15]
	I1205 00:04:57.759371  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.763240  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.766739  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 00:04:57.766823  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 00:04:57.804341  216030 cri.go:89] found id: "9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
	I1205 00:04:57.804366  216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
	I1205 00:04:57.804372  216030 cri.go:89] found id: ""
	I1205 00:04:57.804379  216030 logs.go:282] 2 containers: [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d]
	I1205 00:04:57.804439  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.808307  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.811971  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 00:04:57.812044  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 00:04:57.865535  216030 cri.go:89] found id: "61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
	I1205 00:04:57.865556  216030 cri.go:89] found id: "cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
	I1205 00:04:57.865561  216030 cri.go:89] found id: ""
	I1205 00:04:57.865568  216030 logs.go:282] 2 containers: [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf]
	I1205 00:04:57.865627  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.869504  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.872895  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 00:04:57.873022  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 00:04:57.913425  216030 cri.go:89] found id: "eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
	I1205 00:04:57.913449  216030 cri.go:89] found id: ""
	I1205 00:04:57.913463  216030 logs.go:282] 1 containers: [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e]
	I1205 00:04:57.913526  216030 ssh_runner.go:195] Run: which crictl
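	(Each listing above is a `crictl ps -a --quiet --name=<pattern>` run over SSH, and each gather below tails the matching container with `crictl logs --tail 400 <id>`. Run locally, the same two steps look like this sketch — it assumes crictl is installed and sudo is available, and the component name is just an example:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List container IDs for one component, as the log does for each of them.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			// Tail the last 400 lines of each match, mirroring the gather step below.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Println("crictl logs failed for", id, ":", err)
				continue
			}
			fmt.Printf("=== %s ===\n%s\n", id, logs)
		}
	})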
	I1205 00:04:57.917503  216030 logs.go:123] Gathering logs for kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] ...
	I1205 00:04:57.917529  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
	I1205 00:04:57.959718  216030 logs.go:123] Gathering logs for kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] ...
	I1205 00:04:57.959742  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
	I1205 00:04:58.030401  216030 logs.go:123] Gathering logs for storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] ...
	I1205 00:04:58.030436  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
	I1205 00:04:58.089905  216030 logs.go:123] Gathering logs for storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] ...
	I1205 00:04:58.089933  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
	I1205 00:04:58.129773  216030 logs.go:123] Gathering logs for etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] ...
	I1205 00:04:58.129861  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
	I1205 00:04:58.170834  216030 logs.go:123] Gathering logs for coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] ...
	I1205 00:04:58.170863  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
	I1205 00:04:58.217420  216030 logs.go:123] Gathering logs for kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] ...
	I1205 00:04:58.217449  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
	I1205 00:04:58.264707  216030 logs.go:123] Gathering logs for kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] ...
	I1205 00:04:58.264735  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
	I1205 00:04:58.314661  216030 logs.go:123] Gathering logs for kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] ...
	I1205 00:04:58.314686  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
	I1205 00:04:58.372507  216030 logs.go:123] Gathering logs for kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] ...
	I1205 00:04:58.372541  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
	I1205 00:04:58.414881  216030 logs.go:123] Gathering logs for kubelet ...
	I1205 00:04:58.414910  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 00:04:58.477133  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029161     664 reflector.go:138] object-"kube-system"/"kindnet-token-rrxv8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rrxv8" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:58.477409  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029640     664 reflector.go:138] object-"default"/"default-token-6q5g5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6q5g5" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:58.477645  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029889     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7b2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7b2f" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:58.483742  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:23 old-k8s-version-066167 kubelet[664]: E1204 23:59:23.455493     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.484031  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:24 old-k8s-version-066167 kubelet[664]: E1204 23:59:24.408536     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.486976  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:36 old-k8s-version-066167 kubelet[664]: E1204 23:59:36.194820     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.489032  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:46 old-k8s-version-066167 kubelet[664]: E1204 23:59:46.769553     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.489223  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.173711     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.489549  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.774292     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.490248  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.679719     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.490681  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.812800     664 pod_workers.go:191] Error syncing pod 81fe575b-ab3c-49a1-b013-84ec8c0bea1c ("storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"
	W1205 00:04:58.493488  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:00 old-k8s-version-066167 kubelet[664]: E1205 00:00:00.315343     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.494069  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:08 old-k8s-version-066167 kubelet[664]: E1205 00:00:08.854222     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.494382  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:11 old-k8s-version-066167 kubelet[664]: E1205 00:00:11.169368     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.494707  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:15 old-k8s-version-066167 kubelet[664]: E1205 00:00:15.678257     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.494888  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:26 old-k8s-version-066167 kubelet[664]: E1205 00:00:26.169392     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.495213  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:27 old-k8s-version-066167 kubelet[664]: E1205 00:00:27.168949     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.495824  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:39 old-k8s-version-066167 kubelet[664]: E1205 00:00:39.964267     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.498358  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:41 old-k8s-version-066167 kubelet[664]: E1205 00:00:41.177237     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.498692  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:45 old-k8s-version-066167 kubelet[664]: E1205 00:00:45.677813     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.498876  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:54 old-k8s-version-066167 kubelet[664]: E1205 00:00:54.170310     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.499205  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:57 old-k8s-version-066167 kubelet[664]: E1205 00:00:57.168714     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.499388  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:07 old-k8s-version-066167 kubelet[664]: E1205 00:01:07.169610     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.499711  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:08 old-k8s-version-066167 kubelet[664]: E1205 00:01:08.169137     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.500296  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.080372     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.500479  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.172882     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.500805  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:25 old-k8s-version-066167 kubelet[664]: E1205 00:01:25.677810     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.500988  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:36 old-k8s-version-066167 kubelet[664]: E1205 00:01:36.169551     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.501317  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:39 old-k8s-version-066167 kubelet[664]: E1205 00:01:39.170525     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.501505  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:50 old-k8s-version-066167 kubelet[664]: E1205 00:01:50.170399     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.501834  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:52 old-k8s-version-066167 kubelet[664]: E1205 00:01:52.168832     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.502157  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:03 old-k8s-version-066167 kubelet[664]: E1205 00:02:03.168796     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.504721  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:04 old-k8s-version-066167 kubelet[664]: E1205 00:02:04.179849     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.505045  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.169877     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.505250  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.170577     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.505435  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:27 old-k8s-version-066167 kubelet[664]: E1205 00:02:27.169252     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.505771  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:30 old-k8s-version-066167 kubelet[664]: E1205 00:02:30.169307     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.505960  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:40 old-k8s-version-066167 kubelet[664]: E1205 00:02:40.172577     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.506540  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:43 old-k8s-version-066167 kubelet[664]: E1205 00:02:43.346410     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.506865  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:45 old-k8s-version-066167 kubelet[664]: E1205 00:02:45.677951     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.507048  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:53 old-k8s-version-066167 kubelet[664]: E1205 00:02:53.169354     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.507371  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:58 old-k8s-version-066167 kubelet[664]: E1205 00:02:58.169560     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.507552  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:08 old-k8s-version-066167 kubelet[664]: E1205 00:03:08.172186     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.507880  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:12 old-k8s-version-066167 kubelet[664]: E1205 00:03:12.169424     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.508061  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:21 old-k8s-version-066167 kubelet[664]: E1205 00:03:21.169226     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.508388  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:23 old-k8s-version-066167 kubelet[664]: E1205 00:03:23.168967     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.508586  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:34 old-k8s-version-066167 kubelet[664]: E1205 00:03:34.173087     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.508910  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.509240  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.509423  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.509780  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.509978  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.510307  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.510488  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.510817  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.511000  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.511326  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.511509  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.511836  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.514252  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	I1205 00:04:58.514266  216030 logs.go:123] Gathering logs for dmesg ...
	I1205 00:04:58.514280  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 00:04:58.534466  216030 logs.go:123] Gathering logs for describe nodes ...
	I1205 00:04:58.534492  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 00:04:58.682096  216030 logs.go:123] Gathering logs for coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] ...
	I1205 00:04:58.682123  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
	I1205 00:04:58.725405  216030 logs.go:123] Gathering logs for kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] ...
	I1205 00:04:58.725431  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
	I1205 00:04:58.773720  216030 logs.go:123] Gathering logs for containerd ...
	I1205 00:04:58.773748  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 00:04:58.836186  216030 logs.go:123] Gathering logs for kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] ...
	I1205 00:04:58.836222  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
	I1205 00:04:58.899828  216030 logs.go:123] Gathering logs for container status ...
	I1205 00:04:58.899854  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 00:04:58.942941  216030 logs.go:123] Gathering logs for kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] ...
	I1205 00:04:58.942971  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
	I1205 00:04:59.018047  216030 logs.go:123] Gathering logs for kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] ...
	I1205 00:04:59.018114  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
	I1205 00:04:59.103130  216030 logs.go:123] Gathering logs for etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] ...
	I1205 00:04:59.103163  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
	I1205 00:04:59.151511  216030 logs.go:123] Gathering logs for kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] ...
	I1205 00:04:59.151539  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
	I1205 00:04:59.192352  216030 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:59.192377  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 00:04:59.192481  216030 out.go:270] X Problems detected in kubelet:
	W1205 00:04:59.192494  216030 out.go:270]   Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:59.192519  216030 out.go:270]   Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:59.192537  216030 out.go:270]   Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:59.192564  216030 out.go:270]   Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:59.192574  216030 out.go:270]   Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	I1205 00:04:59.192585  216030 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:59.192591  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:05:09.194046  216030 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 00:05:09.205203  216030 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1205 00:05:09.208473  216030 out.go:201] 
	W1205 00:05:09.210824  216030 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1205 00:05:09.210861  216030 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1205 00:05:09.210876  216030 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1205 00:05:09.210882  216030 out.go:270] * 
	W1205 00:05:09.211748  216030 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 00:05:09.215055  216030 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-066167 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-066167
helpers_test.go:235: (dbg) docker inspect old-k8s-version-066167:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581",
	        "Created": "2024-12-04T23:56:18.334273178Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216226,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-04T23:58:52.621554643Z",
	            "FinishedAt": "2024-12-04T23:58:51.550436754Z"
	        },
	        "Image": "sha256:51526bd7c0894c18bc1ef50650a0aaaea3bed24f70f72f77ac668ae72dfff137",
	        "ResolvConfPath": "/var/lib/docker/containers/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581/hostname",
	        "HostsPath": "/var/lib/docker/containers/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581/hosts",
	        "LogPath": "/var/lib/docker/containers/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581-json.log",
	        "Name": "/old-k8s-version-066167",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-066167:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-066167",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a707e44e8fb4f9f672acf72041ac8c97c7693e1d9d2e7fb11df32d3a76e7124d-init/diff:/var/lib/docker/overlay2/c12526196c20c242bf0c04aa29eed00ae00c2b2711c7a888146d1a43e3b60445/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a707e44e8fb4f9f672acf72041ac8c97c7693e1d9d2e7fb11df32d3a76e7124d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a707e44e8fb4f9f672acf72041ac8c97c7693e1d9d2e7fb11df32d3a76e7124d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a707e44e8fb4f9f672acf72041ac8c97c7693e1d9d2e7fb11df32d3a76e7124d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-066167",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-066167/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-066167",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-066167",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-066167",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7607567a027d24764f241e2ca0319de6a4e929a7935befeeb6cc1fc8e78d51dc",
	            "SandboxKey": "/var/run/docker/netns/7607567a027d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-066167": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "38265aed5fbcc98804134ea94763ea6df8e2518dd3605389f6e3308899d8146d",
	                    "EndpointID": "4be59e607421da4aea843213718b25cde4b684c9316193e117bd24c54ec92fe2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-066167",
	                        "b95ea62abf92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-066167 -n old-k8s-version-066167
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-066167 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-066167 logs -n 25: (2.091934909s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-147448 sudo                                  | cilium-147448            | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC |                     |
	|         | containerd config dump                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-147448 sudo                                  | cilium-147448            | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC |                     |
	|         | systemctl status crio --all                            |                          |         |         |                     |                     |
	|         | --full --no-pager                                      |                          |         |         |                     |                     |
	| ssh     | -p cilium-147448 sudo                                  | cilium-147448            | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-147448 sudo find                             | cilium-147448            | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                          |         |         |                     |                     |
	| ssh     | -p cilium-147448 sudo crio                             | cilium-147448            | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC |                     |
	|         | config                                                 |                          |         |         |                     |                     |
	| delete  | -p cilium-147448                                       | cilium-147448            | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC | 04 Dec 24 23:54 UTC |
	| start   | -p cert-expiration-688223                              | cert-expiration-688223   | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC | 04 Dec 24 23:55 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-932373                               | force-systemd-env-932373 | jenkins | v1.34.0 | 04 Dec 24 23:55 UTC | 04 Dec 24 23:55 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-932373                            | force-systemd-env-932373 | jenkins | v1.34.0 | 04 Dec 24 23:55 UTC | 04 Dec 24 23:55 UTC |
	| start   | -p cert-options-516338                                 | cert-options-516338      | jenkins | v1.34.0 | 04 Dec 24 23:55 UTC | 04 Dec 24 23:56 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-516338 ssh                                | cert-options-516338      | jenkins | v1.34.0 | 04 Dec 24 23:56 UTC | 04 Dec 24 23:56 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-516338 -- sudo                         | cert-options-516338      | jenkins | v1.34.0 | 04 Dec 24 23:56 UTC | 04 Dec 24 23:56 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-516338                                 | cert-options-516338      | jenkins | v1.34.0 | 04 Dec 24 23:56 UTC | 04 Dec 24 23:56 UTC |
	| start   | -p old-k8s-version-066167                              | old-k8s-version-066167   | jenkins | v1.34.0 | 04 Dec 24 23:56 UTC | 04 Dec 24 23:58 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-688223                              | cert-expiration-688223   | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-066167        | old-k8s-version-066167   | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-066167                              | old-k8s-version-066167   | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-688223                              | cert-expiration-688223   | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
	| start   | -p no-preload-013030                                   | no-preload-013030        | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:59 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-066167             | old-k8s-version-066167   | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-066167                              | old-k8s-version-066167   | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-013030             | no-preload-013030        | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-013030                                   | no-preload-013030        | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-013030                  | no-preload-013030        | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-013030                                   | no-preload-013030        | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 00:00:22
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 00:00:22.771753  221677 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:00:22.772024  221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:00:22.772055  221677 out.go:358] Setting ErrFile to fd 2...
	I1205 00:00:22.772088  221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:00:22.772457  221677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1205 00:00:22.773042  221677 out.go:352] Setting JSON to false
	I1205 00:00:22.774794  221677 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6173,"bootTime":1733350650,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1205 00:00:22.774913  221677 start.go:139] virtualization:  
	I1205 00:00:22.778357  221677 out.go:177] * [no-preload-013030] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1205 00:00:22.781982  221677 out.go:177]   - MINIKUBE_LOCATION=20045
	I1205 00:00:22.782160  221677 notify.go:220] Checking for updates...
	I1205 00:00:22.787238  221677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 00:00:22.789958  221677 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1205 00:00:22.792620  221677 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	I1205 00:00:22.795369  221677 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1205 00:00:22.798053  221677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 00:00:22.801201  221677 config.go:182] Loaded profile config "no-preload-013030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1205 00:00:22.801755  221677 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 00:00:22.835793  221677 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 00:00:22.835968  221677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 00:00:22.906416  221677 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-05 00:00:22.896670658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1205 00:00:22.906524  221677 docker.go:318] overlay module found
	I1205 00:00:22.909298  221677 out.go:177] * Using the docker driver based on existing profile
	I1205 00:00:22.911873  221677 start.go:297] selected driver: docker
	I1205 00:00:22.911892  221677 start.go:901] validating driver "docker" against &{Name:no-preload-013030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:00:22.911987  221677 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 00:00:22.912738  221677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 00:00:22.981677  221677 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-05 00:00:22.968050911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1205 00:00:22.982092  221677 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 00:00:22.982125  221677 cni.go:84] Creating CNI manager for ""
	I1205 00:00:22.982169  221677 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 00:00:22.982215  221677 start.go:340] cluster config:
	{Name:no-preload-013030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:00:22.986916  221677 out.go:177] * Starting "no-preload-013030" primary control-plane node in "no-preload-013030" cluster
	I1205 00:00:22.989768  221677 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1205 00:00:22.992479  221677 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1205 00:00:22.995197  221677 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1205 00:00:22.995283  221677 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1205 00:00:22.995354  221677 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/config.json ...
	I1205 00:00:22.995663  221677 cache.go:107] acquiring lock: {Name:mk9da510fc959c7758b67ff4efdc922f3d1213ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:00:22.995750  221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 00:00:22.995769  221677 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 111.594µs
	I1205 00:00:22.995778  221677 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 00:00:22.995795  221677 cache.go:107] acquiring lock: {Name:mk90b2210b9aa218ced54e9ad59b1559b758ea50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:00:22.995832  221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1205 00:00:22.995841  221677 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 47.809µs
	I1205 00:00:22.995847  221677 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1205 00:00:22.995857  221677 cache.go:107] acquiring lock: {Name:mk824b140991ed1d076f69c25b5d723578c5bec8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:00:22.995885  221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1205 00:00:22.995895  221677 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 39.17µs
	I1205 00:00:22.995902  221677 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1205 00:00:22.995912  221677 cache.go:107] acquiring lock: {Name:mkef398d006b259cd437f7ff4d09d913391bb913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:00:22.995939  221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1205 00:00:22.995950  221677 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 38.76µs
	I1205 00:00:22.995963  221677 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1205 00:00:22.995976  221677 cache.go:107] acquiring lock: {Name:mkfa105860076730031a80b15339e0db74389978 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:00:22.996007  221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1205 00:00:22.996017  221677 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 41.722µs
	I1205 00:00:22.996023  221677 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1205 00:00:22.996034  221677 cache.go:107] acquiring lock: {Name:mk4fff236731e18fbfdb75157a24d79a08ae90e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:00:22.996064  221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1205 00:00:22.996073  221677 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 40.343µs
	I1205 00:00:22.996079  221677 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1205 00:00:22.996099  221677 cache.go:107] acquiring lock: {Name:mka5df1fb95f4640c2fcb4dd5c6f811b518cfd11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:00:22.996130  221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1205 00:00:22.996139  221677 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 41.697µs
	I1205 00:00:22.996145  221677 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1205 00:00:22.996154  221677 cache.go:107] acquiring lock: {Name:mkc76832b9384f9aff33c7cfc2d625069b4bd563 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:00:22.996186  221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1205 00:00:22.996194  221677 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 41.484µs
	I1205 00:00:22.996200  221677 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1205 00:00:22.996206  221677 cache.go:87] Successfully saved all images to host disk.
	I1205 00:00:23.024680  221677 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1205 00:00:23.024707  221677 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1205 00:00:23.024725  221677 cache.go:194] Successfully downloaded all kic artifacts
	I1205 00:00:23.024749  221677 start.go:360] acquireMachinesLock for no-preload-013030: {Name:mkf3466c8e736c81de5b2facb9709787c162d97b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 00:00:23.024913  221677 start.go:364] duration metric: took 137.259µs to acquireMachinesLock for "no-preload-013030"
	I1205 00:00:23.024959  221677 start.go:96] Skipping create...Using existing machine configuration
	I1205 00:00:23.024967  221677 fix.go:54] fixHost starting: 
	I1205 00:00:23.025323  221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
	I1205 00:00:23.043868  221677 fix.go:112] recreateIfNeeded on no-preload-013030: state=Stopped err=<nil>
	W1205 00:00:23.043909  221677 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 00:00:23.048784  221677 out.go:177] * Restarting existing docker container for "no-preload-013030" ...
	I1205 00:00:23.624178  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:26.123881  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:23.051562  221677 cli_runner.go:164] Run: docker start no-preload-013030
	I1205 00:00:23.380258  221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
	I1205 00:00:23.405882  221677 kic.go:430] container "no-preload-013030" state is running.
	I1205 00:00:23.406280  221677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-013030
	I1205 00:00:23.434868  221677 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/config.json ...
	I1205 00:00:23.435095  221677 machine.go:93] provisionDockerMachine start ...
	I1205 00:00:23.435152  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:23.463531  221677 main.go:141] libmachine: Using SSH client type: native
	I1205 00:00:23.463789  221677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1205 00:00:23.463798  221677 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 00:00:23.465070  221677 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53028->127.0.0.1:33068: read: connection reset by peer
	I1205 00:00:26.593262  221677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-013030
	
	I1205 00:00:26.593297  221677 ubuntu.go:169] provisioning hostname "no-preload-013030"
	I1205 00:00:26.593359  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:26.611479  221677 main.go:141] libmachine: Using SSH client type: native
	I1205 00:00:26.611725  221677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1205 00:00:26.611737  221677 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-013030 && echo "no-preload-013030" | sudo tee /etc/hostname
	I1205 00:00:26.751962  221677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-013030
	
	I1205 00:00:26.752060  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:26.768780  221677 main.go:141] libmachine: Using SSH client type: native
	I1205 00:00:26.769029  221677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1205 00:00:26.769051  221677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-013030' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-013030/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-013030' | sudo tee -a /etc/hosts; 
				fi
			fi
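For context, the block above is the idempotent /etc/hosts rewrite minikube pipes over SSH: ensure a 127.0.1.1 entry for the new hostname, reusing an existing 127.0.1.1 line when present. The same pattern in a standalone sketch (HOST is a placeholder, not from this run):
	HOST=example-host                      # placeholder; this run used no-preload-013030
	if ! grep -q "\s${HOST}$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HOST}/" /etc/hosts   # replace the existing entry
	  else
	    echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts                 # or append a new one
	  fi
	fi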
	I1205 00:00:26.898282  221677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 00:00:26.898375  221677 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20045-2283/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-2283/.minikube}
	I1205 00:00:26.898410  221677 ubuntu.go:177] setting up certificates
	I1205 00:00:26.898454  221677 provision.go:84] configureAuth start
	I1205 00:00:26.898563  221677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-013030
	I1205 00:00:26.916483  221677 provision.go:143] copyHostCerts
	I1205 00:00:26.916566  221677 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem, removing ...
	I1205 00:00:26.916578  221677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem
	I1205 00:00:26.916658  221677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem (1082 bytes)
	I1205 00:00:26.916769  221677 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem, removing ...
	I1205 00:00:26.916774  221677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem
	I1205 00:00:26.916800  221677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem (1123 bytes)
	I1205 00:00:26.916853  221677 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem, removing ...
	I1205 00:00:26.916857  221677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem
	I1205 00:00:26.916880  221677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem (1679 bytes)
	I1205 00:00:26.916927  221677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem org=jenkins.no-preload-013030 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-013030]
	I1205 00:00:27.063684  221677 provision.go:177] copyRemoteCerts
	I1205 00:00:27.063761  221677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 00:00:27.063803  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:27.081682  221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
	I1205 00:00:27.181983  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 00:00:27.207152  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 00:00:27.231598  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 00:00:27.256704  221677 provision.go:87] duration metric: took 358.220423ms to configureAuth
	I1205 00:00:27.256781  221677 ubuntu.go:193] setting minikube options for container-runtime
	I1205 00:00:27.256991  221677 config.go:182] Loaded profile config "no-preload-013030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1205 00:00:27.257006  221677 machine.go:96] duration metric: took 3.821903614s to provisionDockerMachine
	I1205 00:00:27.257016  221677 start.go:293] postStartSetup for "no-preload-013030" (driver="docker")
	I1205 00:00:27.257026  221677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 00:00:27.257077  221677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 00:00:27.257196  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:27.273570  221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
	I1205 00:00:27.362324  221677 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 00:00:27.365632  221677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1205 00:00:27.365681  221677 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1205 00:00:27.365708  221677 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1205 00:00:27.365721  221677 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1205 00:00:27.365732  221677 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-2283/.minikube/addons for local assets ...
	I1205 00:00:27.365807  221677 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-2283/.minikube/files for local assets ...
	I1205 00:00:27.365920  221677 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem -> 77362.pem in /etc/ssl/certs
	I1205 00:00:27.366065  221677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 00:00:27.374680  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem --> /etc/ssl/certs/77362.pem (1708 bytes)
	I1205 00:00:27.400024  221677 start.go:296] duration metric: took 142.993536ms for postStartSetup
	I1205 00:00:27.400152  221677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 00:00:27.400201  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:27.416549  221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
	I1205 00:00:27.503274  221677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1205 00:00:27.508372  221677 fix.go:56] duration metric: took 4.483399982s for fixHost
	I1205 00:00:27.508416  221677 start.go:83] releasing machines lock for "no-preload-013030", held for 4.483485805s
	I1205 00:00:27.508502  221677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-013030
	I1205 00:00:27.525451  221677 ssh_runner.go:195] Run: cat /version.json
	I1205 00:00:27.525536  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:27.525623  221677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 00:00:27.525677  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:27.551809  221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
	I1205 00:00:27.555961  221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
	I1205 00:00:27.644526  221677 ssh_runner.go:195] Run: systemctl --version
	I1205 00:00:27.787747  221677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 00:00:27.792224  221677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1205 00:00:27.810665  221677 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1205 00:00:27.810760  221677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 00:00:27.819727  221677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
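The find/sed pair above patches any loopback CNI config in place: it injects a "name": "loopback" field when missing and pins "cniVersion" to 1.0.0, presumably for compatibility with the bundled CNI plugins. A read-only check of the result (a sketch, not part of minikube's output):
	# After patching, the loopback config should carry both fields:
	grep -E '"name"|"cniVersion"' /etc/cni/net.d/*loopback.conf*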
	I1205 00:00:27.819758  221677 start.go:495] detecting cgroup driver to use...
	I1205 00:00:27.819811  221677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1205 00:00:27.819876  221677 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 00:00:27.833971  221677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 00:00:27.845925  221677 docker.go:217] disabling cri-docker service (if available) ...
	I1205 00:00:27.846045  221677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 00:00:27.859493  221677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 00:00:27.871602  221677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 00:00:27.973945  221677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 00:00:28.083981  221677 docker.go:233] disabling docker service ...
	I1205 00:00:28.084077  221677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 00:00:28.101680  221677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 00:00:28.116262  221677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 00:00:28.214291  221677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 00:00:28.307646  221677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 00:00:28.318962  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 00:00:28.336388  221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1205 00:00:28.347035  221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 00:00:28.357426  221677 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 00:00:28.357510  221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 00:00:28.368184  221677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 00:00:28.378773  221677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 00:00:28.389054  221677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 00:00:28.398909  221677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 00:00:28.408434  221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 00:00:28.418511  221677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 00:00:28.428426  221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 00:00:28.439623  221677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 00:00:28.449360  221677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 00:00:28.458018  221677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:00:28.549008  221677 ssh_runner.go:195] Run: sudo systemctl restart containerd
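The sed series above rewrites /etc/containerd/config.toml in place (sandbox image, cgroupfs driver, runc v2 shim, CNI conf_dir, unprivileged ports) before the daemon restart. A quick, read-only way to confirm the values those edits are meant to leave behind (a sketch, not from this log):
	# Expect SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10",
	# conf_dir = "/etc/cni/net.d", and enable_unprivileged_ports = true.
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml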
	I1205 00:00:28.749668  221677 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1205 00:00:28.749786  221677 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1205 00:00:28.754237  221677 start.go:563] Will wait 60s for crictl version
	I1205 00:00:28.754355  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:00:28.758057  221677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 00:00:28.799691  221677 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1205 00:00:28.799781  221677 ssh_runner.go:195] Run: containerd --version
	I1205 00:00:28.822981  221677 ssh_runner.go:195] Run: containerd --version
	I1205 00:00:28.854936  221677 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
	I1205 00:00:28.857684  221677 cli_runner.go:164] Run: docker network inspect no-preload-013030 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1205 00:00:28.872864  221677 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1205 00:00:28.876372  221677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
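The one-liner above is minikube's replace-or-append pattern for pinned host entries: strip any old line, write the fresh one to a temp file, then copy the file back so /etc/hosts is swapped in a single step. The same command, reformatted for readability:
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # one cp, so readers never see a half-written hosts file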
	I1205 00:00:28.886902  221677 kubeadm.go:883] updating cluster {Name:no-preload-013030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 00:00:28.887053  221677 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1205 00:00:28.887110  221677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 00:00:28.928079  221677 containerd.go:627] all images are preloaded for containerd runtime.
	I1205 00:00:28.928104  221677 cache_images.go:84] Images are preloaded, skipping loading
	I1205 00:00:28.928112  221677 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.2 containerd true true} ...
	I1205 00:00:28.928215  221677 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-013030 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
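The doubled ExecStart in the [Service] drop-in above is the standard systemd override idiom: the empty ExecStart= clears the command inherited from the base kubelet unit before the new command line is set. On the node, the merged result can be inspected with (a sketch, not part of this log):
	systemctl cat kubelet   # prints the base unit plus the 10-kubeadm.conf drop-in installed below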
	I1205 00:00:28.928282  221677 ssh_runner.go:195] Run: sudo crictl info
	I1205 00:00:28.971509  221677 cni.go:84] Creating CNI manager for ""
	I1205 00:00:28.971582  221677 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1205 00:00:28.971607  221677 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 00:00:28.971660  221677 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-013030 NodeName:no-preload-013030 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 00:00:28.971844  221677 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-013030"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
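One detail worth noting about the generated config above: it is staged as kubeadm.yaml.new (the 2307-byte scp just below), and later in this run minikube diffs it against the previous kubeadm.yaml to decide whether the control plane must be reconfigured. The check amounts to (a sketch of the command the log shows at 00:00:29.764269):
	# Exit status 0 (no differences) lets the restart path skip kubeadm reconfiguration.
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new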
	I1205 00:00:28.971967  221677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 00:00:28.984809  221677 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 00:00:28.984910  221677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 00:00:28.996457  221677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1205 00:00:29.016978  221677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 00:00:29.039571  221677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
	I1205 00:00:29.059288  221677 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1205 00:00:29.063109  221677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 00:00:29.074544  221677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:00:29.171709  221677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 00:00:29.187636  221677 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030 for IP: 192.168.85.2
	I1205 00:00:29.187659  221677 certs.go:194] generating shared ca certs ...
	I1205 00:00:29.187678  221677 certs.go:226] acquiring lock for ca certs: {Name:mk1d98569ca320b9ee7e00b709eb6c9a159130d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:00:29.187852  221677 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-2283/.minikube/ca.key
	I1205 00:00:29.187909  221677 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.key
	I1205 00:00:29.187921  221677 certs.go:256] generating profile certs ...
	I1205 00:00:29.188024  221677 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.key
	I1205 00:00:29.188103  221677 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/apiserver.key.8c251c27
	I1205 00:00:29.188157  221677 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/proxy-client.key
	I1205 00:00:29.188318  221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736.pem (1338 bytes)
	W1205 00:00:29.188361  221677 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736_empty.pem, impossibly tiny 0 bytes
	I1205 00:00:29.188373  221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 00:00:29.188404  221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem (1082 bytes)
	I1205 00:00:29.188436  221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem (1123 bytes)
	I1205 00:00:29.188469  221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem (1679 bytes)
	I1205 00:00:29.188520  221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem (1708 bytes)
	I1205 00:00:29.189242  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 00:00:29.216435  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1205 00:00:29.241407  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 00:00:29.266772  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 00:00:29.291726  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 00:00:29.317210  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 00:00:29.345302  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 00:00:29.374962  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 00:00:29.418225  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736.pem --> /usr/share/ca-certificates/7736.pem (1338 bytes)
	I1205 00:00:29.445814  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem --> /usr/share/ca-certificates/77362.pem (1708 bytes)
	I1205 00:00:29.473483  221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 00:00:29.499424  221677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 00:00:29.527810  221677 ssh_runner.go:195] Run: openssl version
	I1205 00:00:29.535753  221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7736.pem && ln -fs /usr/share/ca-certificates/7736.pem /etc/ssl/certs/7736.pem"
	I1205 00:00:29.546677  221677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7736.pem
	I1205 00:00:29.550834  221677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:19 /usr/share/ca-certificates/7736.pem
	I1205 00:00:29.550907  221677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7736.pem
	I1205 00:00:29.558496  221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7736.pem /etc/ssl/certs/51391683.0"
	I1205 00:00:29.568003  221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77362.pem && ln -fs /usr/share/ca-certificates/77362.pem /etc/ssl/certs/77362.pem"
	I1205 00:00:29.577950  221677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77362.pem
	I1205 00:00:29.581831  221677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:19 /usr/share/ca-certificates/77362.pem
	I1205 00:00:29.581898  221677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77362.pem
	I1205 00:00:29.588733  221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77362.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 00:00:29.597904  221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 00:00:29.612020  221677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:00:29.616296  221677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:11 /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:00:29.616413  221677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 00:00:29.628338  221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
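The hash-and-symlink sequence above reproduces OpenSSL's CA lookup scheme: certificates in /etc/ssl/certs are located via symlinks named <subject-hash>.0, which is why minikubeCA.pem ends up linked as b5213941.0. The same two steps by hand (sketch):
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"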
	I1205 00:00:29.637879  221677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 00:00:29.641327  221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 00:00:29.648039  221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 00:00:29.654960  221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 00:00:29.662103  221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 00:00:29.669444  221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 00:00:29.676413  221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
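Each -checkend probe above asks whether a certificate remains valid for the next 86400 seconds (24 hours); openssl exits non-zero if it would expire inside that window, presumably feeding minikube's decision about regenerating control-plane certs. Standalone (sketch):
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	  && echo "valid for at least 24h" \
	  || echo "expires (or has expired) within 24h"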
	I1205 00:00:29.683895  221677 kubeadm.go:392] StartCluster: {Name:no-preload-013030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 00:00:29.684039  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1205 00:00:29.684137  221677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 00:00:29.730293  221677 cri.go:89] found id: "3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
	I1205 00:00:29.730356  221677 cri.go:89] found id: "fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
	I1205 00:00:29.730374  221677 cri.go:89] found id: "8c5755436bd099e0109e8164517c428a7492b4ba0b822bf3510106d259f125a0"
	I1205 00:00:29.730394  221677 cri.go:89] found id: "f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
	I1205 00:00:29.730398  221677 cri.go:89] found id: "627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
	I1205 00:00:29.730402  221677 cri.go:89] found id: "8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
	I1205 00:00:29.730405  221677 cri.go:89] found id: "e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
	I1205 00:00:29.730408  221677 cri.go:89] found id: "55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
	I1205 00:00:29.730411  221677 cri.go:89] found id: ""
	I1205 00:00:29.730479  221677 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1205 00:00:29.742877  221677 cri.go:116] JSON = null
	W1205 00:00:29.742953  221677 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
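The warning above comes from a consistency check on the restart path: crictl reported 8 kube-system containers while runc's state directory listed none as paused (JSON = null), so minikube logs the mismatch and moves on. The two views being compared, runnable by hand (both commands appear verbatim above):
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l   # 8 in this run
	sudo runc --root /run/containerd/runc/k8s.io list -f json                           # "null" in this run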
	I1205 00:00:29.743046  221677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 00:00:29.751751  221677 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 00:00:29.751775  221677 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 00:00:29.751847  221677 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 00:00:29.760713  221677 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 00:00:29.761508  221677 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-013030" does not appear in /home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1205 00:00:29.761788  221677 kubeconfig.go:62] /home/jenkins/minikube-integration/20045-2283/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-013030" cluster setting kubeconfig missing "no-preload-013030" context setting]
	I1205 00:00:29.762765  221677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-2283/kubeconfig: {Name:mka3b7dd57c7b1524b8db81fd47d2a503644c81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:00:29.764269  221677 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 00:00:29.773839  221677 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I1205 00:00:29.773871  221677 kubeadm.go:597] duration metric: took 22.089863ms to restartPrimaryControlPlane
	I1205 00:00:29.773880  221677 kubeadm.go:394] duration metric: took 89.995888ms to StartCluster
	I1205 00:00:29.773897  221677 settings.go:142] acquiring lock: {Name:mkf88c0c5090e30b7bb8c2e4a8e4f7c9dd68316c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:00:29.773966  221677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1205 00:00:29.774915  221677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-2283/kubeconfig: {Name:mka3b7dd57c7b1524b8db81fd47d2a503644c81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 00:00:29.775159  221677 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1205 00:00:29.775514  221677 config.go:182] Loaded profile config "no-preload-013030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1205 00:00:29.775598  221677 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
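The toEnable map is minikube's full addon registry with only the four addons this profile already had switched on: default-storageclass, storage-provisioner, metrics-server and dashboard. The user-facing equivalent of what the lines below do is the standard addons CLI:

	minikube -p no-preload-013030 addons list                   # per-addon enabled/disabled state
	minikube -p no-preload-013030 addons enable metrics-server  # flip a single addon on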
	I1205 00:00:29.775710  221677 addons.go:69] Setting storage-provisioner=true in profile "no-preload-013030"
	I1205 00:00:29.775736  221677 addons.go:234] Setting addon storage-provisioner=true in "no-preload-013030"
	W1205 00:00:29.775747  221677 addons.go:243] addon storage-provisioner should already be in state true
	I1205 00:00:29.775769  221677 host.go:66] Checking if "no-preload-013030" exists ...
	I1205 00:00:29.776260  221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
	I1205 00:00:29.776667  221677 addons.go:69] Setting default-storageclass=true in profile "no-preload-013030"
	I1205 00:00:29.776689  221677 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-013030"
	I1205 00:00:29.777031  221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
	I1205 00:00:29.777095  221677 addons.go:69] Setting metrics-server=true in profile "no-preload-013030"
	I1205 00:00:29.777154  221677 addons.go:234] Setting addon metrics-server=true in "no-preload-013030"
	W1205 00:00:29.777163  221677 addons.go:243] addon metrics-server should already be in state true
	I1205 00:00:29.777261  221677 host.go:66] Checking if "no-preload-013030" exists ...
	I1205 00:00:29.777812  221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
	I1205 00:00:29.779136  221677 addons.go:69] Setting dashboard=true in profile "no-preload-013030"
	I1205 00:00:29.779157  221677 addons.go:234] Setting addon dashboard=true in "no-preload-013030"
	W1205 00:00:29.779164  221677 addons.go:243] addon dashboard should already be in state true
	I1205 00:00:29.779186  221677 host.go:66] Checking if "no-preload-013030" exists ...
	I1205 00:00:29.779788  221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
	I1205 00:00:29.780266  221677 out.go:177] * Verifying Kubernetes components...
	I1205 00:00:29.783147  221677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 00:00:29.847678  221677 addons.go:234] Setting addon default-storageclass=true in "no-preload-013030"
	W1205 00:00:29.847706  221677 addons.go:243] addon default-storageclass should already be in state true
	I1205 00:00:29.847733  221677 host.go:66] Checking if "no-preload-013030" exists ...
	I1205 00:00:29.853116  221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
	I1205 00:00:29.872872  221677 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 00:00:29.876631  221677 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 00:00:29.880331  221677 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 00:00:29.880372  221677 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 00:00:29.880443  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:29.880623  221677 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 00:00:29.886454  221677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 00:00:29.886590  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:29.892273  221677 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1205 00:00:29.896035  221677 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1205 00:00:28.141044  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:30.641472  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:29.901542  221677 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 00:00:29.901626  221677 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 00:00:29.901709  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:29.902419  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1205 00:00:29.902439  221677 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1205 00:00:29.902502  221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
	I1205 00:00:29.956071  221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
	I1205 00:00:29.956610  221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
	I1205 00:00:29.967217  221677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 00:00:29.981943  221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
	I1205 00:00:29.991877  221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
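Each sshutil client above was built by first asking Docker which host port is mapped to the container's 22/tcp (the Go template in the cli_runner lines) and then dialing 127.0.0.1 with the profile's generated key. Done by hand it looks like this (a sketch; the key path is copied from the log):

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-013030)
	ssh -i /home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa \
	    -p "$PORT" docker@127.0.0.1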
	I1205 00:00:30.005080  221677 node_ready.go:35] waiting up to 6m0s for node "no-preload-013030" to be "Ready" ...
	I1205 00:00:30.231598  221677 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 00:00:30.231678  221677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 00:00:30.281468  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1205 00:00:30.281546  221677 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1205 00:00:30.287468  221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 00:00:30.342249  221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 00:00:30.350973  221677 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 00:00:30.351054  221677 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 00:00:30.427776  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1205 00:00:30.427861  221677 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1205 00:00:30.514892  221677 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 00:00:30.514983  221677 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 00:00:30.583523  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1205 00:00:30.583609  221677 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1205 00:00:30.658635  221677 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1205 00:00:30.658724  221677 retry.go:31] will retry after 340.301102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
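The first apply fails because kubectl's client-side validation has to download the OpenAPI schema from the apiserver, which is still coming back up after the container restart (connection refused on [::1]:8443). retry.go backs off for ~340ms, and the manifest is eventually reapplied with --force further down. Apiserver readiness can be probed directly before applying, using the same in-node kubeconfig (a sketch):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.31.2/kubectl get --raw /readyz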
	I1205 00:00:30.767621  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1205 00:00:30.767683  221677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1205 00:00:30.824694  221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 00:00:30.826945  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1205 00:00:30.826973  221677 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1205 00:00:30.875477  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1205 00:00:30.875506  221677 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1205 00:00:30.946308  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1205 00:00:30.946334  221677 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1205 00:00:30.993254  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1205 00:00:30.993279  221677 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1205 00:00:31.000120  221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 00:00:31.067610  221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 00:00:31.067636  221677 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1205 00:00:31.161892  221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1205 00:00:33.124970  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:35.125567  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:37.125828  216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:35.416417  221677 node_ready.go:49] node "no-preload-013030" has status "Ready":"True"
	I1205 00:00:35.416494  221677 node_ready.go:38] duration metric: took 5.411360289s for node "no-preload-013030" to be "Ready" ...
	I1205 00:00:35.416519  221677 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I1205 00:00:35.537640  221677 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xgmhd" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.625207  221677 pod_ready.go:93] pod "coredns-7c65d6cfc9-xgmhd" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:35.625229  221677 pod_ready.go:82] duration metric: took 87.512881ms for pod "coredns-7c65d6cfc9-xgmhd" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.625241  221677 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-013030" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.639987  221677 pod_ready.go:93] pod "etcd-no-preload-013030" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:35.640013  221677 pod_ready.go:82] duration metric: took 14.764467ms for pod "etcd-no-preload-013030" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.640027  221677 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-013030" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.670217  221677 pod_ready.go:93] pod "kube-apiserver-no-preload-013030" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:35.670242  221677 pod_ready.go:82] duration metric: took 30.206499ms for pod "kube-apiserver-no-preload-013030" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.670254  221677 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-013030" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.679710  221677 pod_ready.go:93] pod "kube-controller-manager-no-preload-013030" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:35.679734  221677 pod_ready.go:82] duration metric: took 9.471351ms for pod "kube-controller-manager-no-preload-013030" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.679748  221677 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7qgmh" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.685719  221677 pod_ready.go:93] pod "kube-proxy-7qgmh" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:35.685756  221677 pod_ready.go:82] duration metric: took 6.001285ms for pod "kube-proxy-7qgmh" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:35.685767  221677 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-013030" in "kube-system" namespace to be "Ready" ...
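Each pod_ready wait polls the pod's status.conditions for the Ready condition until it reports True or the 6m budget runs out; the interleaved "Ready":"False" lines through the rest of this log are those polls, coming from both test processes (216030 and 221677). The same check with plain kubectl would be, as a sketch:

	kubectl -n kube-system wait --for=condition=Ready pod/kube-scheduler-no-preload-013030 --timeout=6m
	kubectl -n kube-system get pod kube-scheduler-no-preload-013030 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'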
	I1205 00:00:35.848196  221677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.505863869s)
	I1205 00:00:37.694670  221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:38.308016  221677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.483235281s)
	I1205 00:00:38.308051  221677 addons.go:475] Verifying addon metrics-server=true in "no-preload-013030"
	I1205 00:00:38.379601  221677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.379439848s)
	I1205 00:00:38.495688  221677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.333749321s)
	I1205 00:00:38.498416  221677 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-013030 addons enable metrics-server
	
	I1205 00:00:38.501159  221677 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I1205 00:00:39.622706  216030 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:39.622731  216030 pod_ready.go:82] duration metric: took 1m7.50576737s for pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:39.622744  216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xh97b" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:39.627598  216030 pod_ready.go:93] pod "kube-proxy-xh97b" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:39.627663  216030 pod_ready.go:82] duration metric: took 4.909057ms for pod "kube-proxy-xh97b" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:39.627682  216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:41.635075  216030 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:38.503913  221677 addons.go:510] duration metric: took 8.728324608s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I1205 00:00:39.695406  221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:41.698192  221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:44.133262  216030 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:45.634685  216030 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:45.634754  216030 pod_ready.go:82] duration metric: took 6.007062956s for pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:45.634781  216030 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:44.192232  221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:46.192379  221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:47.641160  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:50.142040  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:48.692355  221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:49.691651  221677 pod_ready.go:93] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"True"
	I1205 00:00:49.691676  221677 pod_ready.go:82] duration metric: took 14.005902003s for pod "kube-scheduler-no-preload-013030" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:49.691688  221677 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace to be "Ready" ...
	I1205 00:00:51.698244  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:52.640397  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:54.641624  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:57.141636  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:54.199619  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:56.697770  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:59.640966  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:01.641368  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:00:59.198473  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:01.199043  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:04.141819  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:06.641245  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:03.698734  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:06.197658  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:08.643778  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:11.142085  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:08.197916  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:10.198915  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:12.697351  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:13.142210  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:15.143248  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:14.698896  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:17.197638  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:17.640366  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:19.642863  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:22.141401  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:19.198416  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:21.198698  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:24.141731  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:26.640254  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:23.697698  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:25.698515  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:27.698708  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:29.141453  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:31.640747  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:30.197920  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:32.698572  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:33.640815  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:35.640860  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:35.199281  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:37.698663  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:37.641357  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:40.141576  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:42.142551  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:40.197605  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:42.200552  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:44.640790  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:46.640978  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:44.698301  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:46.698450  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:49.140930  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:51.640575  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:49.198002  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:51.198178  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:54.141681  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:56.640948  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:53.698451  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:56.198918  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:58.641251  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:01.140947  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:01:58.697785  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:00.698074  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:02.703184  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:03.141906  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:05.641253  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:05.198730  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:07.697525  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:08.140771  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:10.141503  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:12.141977  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:09.698188  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:11.698438  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:14.640789  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:16.640823  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:13.698487  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:16.199130  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:18.641073  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:20.641191  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:18.697781  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:20.697982  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:22.641262  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:25.142092  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:27.142352  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:23.197363  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:25.197966  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:27.698390  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:29.640668  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:32.144704  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:30.198580  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:32.698095  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:34.641267  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:37.141776  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:34.699022  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:36.699177  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:39.640880  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:42.143365  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:39.198254  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:41.697801  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:44.641367  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:47.141280  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:43.697979  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:45.698225  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:47.698311  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:49.141788  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:51.141822  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:49.698861  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:52.198596  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:53.186734  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:55.641386  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:54.697722  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:57.197390  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:58.141377  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:00.190535  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:02:59.197621  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:01.198155  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:02.640575  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:04.641218  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:07.141448  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:03.200383  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:05.697829  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:07.698244  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:09.142518  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:11.646299  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:09.698594  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:11.699095  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:14.140064  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:16.141711  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:14.198278  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:16.698329  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:18.640395  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:20.641321  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:19.197542  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:21.198082  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:23.141469  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:25.142104  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:23.697909  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:26.198217  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:27.641721  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:30.141599  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:28.697745  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:30.697967  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:32.698010  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:32.640894  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:34.641205  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:37.141279  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:34.701701  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:37.197478  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:39.141499  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:41.141843  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:39.697685  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:41.698710  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:43.142457  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:45.642402  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:43.702407  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:46.197923  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:48.141074  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:50.640840  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:48.698167  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:51.197759  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:52.640941  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:55.142516  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:53.700214  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:56.198709  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:57.641042  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:00.258272  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:03:58.698614  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:01.202258  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:02.640707  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:04.640786  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:06.640980  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:03.698025  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:05.702558  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:08.641054  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:11.146089  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:08.197928  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:10.198008  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:12.697293  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:13.640923  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:16.141477  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:14.698200  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:16.698405  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:18.641364  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:21.154913  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:19.199197  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:21.698068  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:23.640479  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:25.641079  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:24.197565  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:26.197872  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:27.642694  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:30.141328  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:32.142061  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:28.698663  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:31.197409  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:34.646681  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:37.141273  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:33.198422  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:35.709621  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:39.142582  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:41.641272  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:38.197579  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:40.199264  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:42.199775  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:44.154672  216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:45.641935  216030 pod_ready.go:82] duration metric: took 4m0.007127886s for pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace to be "Ready" ...
	E1205 00:04:45.641961  216030 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 00:04:45.641970  216030 pod_ready.go:39] duration metric: took 5m23.689087349s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
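This is what ultimately fails the test: metrics-server-9975d5f86-ksvdj never became Ready inside the 4m WaitExtra budget. That is unsurprising given the image these tests point metrics-server at, fake.domain/registry.k8s.io/echoserver:1.4 (the same "Using image" line appears for the no-preload profile above): the registry hostname cannot resolve, so the pull can never succeed and the pod stays unready until the deadline. A hedged way to confirm the cause on a live cluster:

	kubectl -n kube-system describe pod -l k8s-app=metrics-server   # events would typically show ImagePullBackOff here
	kubectl -n kube-system get events --field-selector involvedObject.kind=Pod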
	I1205 00:04:45.641984  216030 api_server.go:52] waiting for apiserver process to appear ...
	I1205 00:04:45.642014  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 00:04:45.642080  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 00:04:45.701396  216030 cri.go:89] found id: "d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
	I1205 00:04:45.701417  216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
	I1205 00:04:45.701422  216030 cri.go:89] found id: ""
	I1205 00:04:45.701428  216030 logs.go:282] 2 containers: [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8]
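With the wait failed, minikube switches to log collection: for each control-plane component it asks crictl for every container (any state) whose name matches, which is why two IDs typically come back after a restart (the exited pre-restart instance plus the running one), then resolves the crictl binary with `which` before tailing logs. The listing pattern by hand:

	sudo crictl ps -a --quiet --name=kube-apiserver                # IDs in any state
	sudo crictl ps --quiet --name=kube-apiserver --state=running   # only the live instance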
	I1205 00:04:45.701487  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.706274  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.709870  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 00:04:45.709950  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 00:04:45.752726  216030 cri.go:89] found id: "d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
	I1205 00:04:45.752759  216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
	I1205 00:04:45.752764  216030 cri.go:89] found id: ""
	I1205 00:04:45.752771  216030 logs.go:282] 2 containers: [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e]
	I1205 00:04:45.752844  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.756595  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.759984  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 00:04:45.760054  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 00:04:45.802699  216030 cri.go:89] found id: "18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
	I1205 00:04:45.802722  216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
	I1205 00:04:45.802733  216030 cri.go:89] found id: ""
	I1205 00:04:45.802741  216030 logs.go:282] 2 containers: [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c]
	I1205 00:04:45.802798  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.806565  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.810357  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 00:04:45.810434  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 00:04:45.853797  216030 cri.go:89] found id: "4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
	I1205 00:04:45.853818  216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
	I1205 00:04:45.853823  216030 cri.go:89] found id: ""
	I1205 00:04:45.853832  216030 logs.go:282] 2 containers: [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0]
	I1205 00:04:45.853889  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.857263  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.862164  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 00:04:45.862243  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 00:04:45.902320  216030 cri.go:89] found id: "355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
	I1205 00:04:45.902409  216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
	I1205 00:04:45.902423  216030 cri.go:89] found id: ""
	I1205 00:04:45.902431  216030 logs.go:282] 2 containers: [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88]
	I1205 00:04:45.902501  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.906129  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.909489  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 00:04:45.909590  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 00:04:45.951353  216030 cri.go:89] found id: "0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
	I1205 00:04:45.951376  216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
	I1205 00:04:45.951381  216030 cri.go:89] found id: ""
	I1205 00:04:45.951388  216030 logs.go:282] 2 containers: [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15]
	I1205 00:04:45.951449  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.955123  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:45.958548  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 00:04:45.958621  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 00:04:46.013456  216030 cri.go:89] found id: "9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
	I1205 00:04:46.013484  216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
	I1205 00:04:46.013489  216030 cri.go:89] found id: ""
	I1205 00:04:46.013497  216030 logs.go:282] 2 containers: [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d]
	I1205 00:04:46.013620  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:46.018166  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:46.022058  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 00:04:46.022188  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 00:04:46.071154  216030 cri.go:89] found id: "eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
	I1205 00:04:46.071186  216030 cri.go:89] found id: ""
	I1205 00:04:46.071195  216030 logs.go:282] 1 containers: [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e]
	I1205 00:04:46.071278  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:46.075279  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 00:04:46.075401  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 00:04:46.115487  216030 cri.go:89] found id: "61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
	I1205 00:04:46.115560  216030 cri.go:89] found id: "cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
	I1205 00:04:46.115580  216030 cri.go:89] found id: ""
	I1205 00:04:46.115593  216030 logs.go:282] 2 containers: [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf]
	I1205 00:04:46.115669  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:46.119363  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:46.122924  216030 logs.go:123] Gathering logs for coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] ...
	I1205 00:04:46.122956  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
	I1205 00:04:46.164473  216030 logs.go:123] Gathering logs for storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] ...
	I1205 00:04:46.164503  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
	I1205 00:04:46.219238  216030 logs.go:123] Gathering logs for describe nodes ...
	I1205 00:04:46.219270  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 00:04:46.367441  216030 logs.go:123] Gathering logs for kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] ...
	I1205 00:04:46.367470  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
	I1205 00:04:46.406779  216030 logs.go:123] Gathering logs for kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] ...
	I1205 00:04:46.406805  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
	I1205 00:04:46.454765  216030 logs.go:123] Gathering logs for kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] ...
	I1205 00:04:46.454792  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
	I1205 00:04:46.498510  216030 logs.go:123] Gathering logs for kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] ...
	I1205 00:04:46.498538  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
	I1205 00:04:46.537447  216030 logs.go:123] Gathering logs for containerd ...
	I1205 00:04:46.537476  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 00:04:46.617148  216030 logs.go:123] Gathering logs for etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] ...
	I1205 00:04:46.617196  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
	I1205 00:04:46.667834  216030 logs.go:123] Gathering logs for kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] ...
	I1205 00:04:46.667985  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
	I1205 00:04:46.732274  216030 logs.go:123] Gathering logs for kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] ...
	I1205 00:04:46.732303  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
	I1205 00:04:46.792624  216030 logs.go:123] Gathering logs for kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] ...
	I1205 00:04:46.792656  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
	I1205 00:04:46.830707  216030 logs.go:123] Gathering logs for storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] ...
	I1205 00:04:46.830736  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
	I1205 00:04:46.875737  216030 logs.go:123] Gathering logs for kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] ...
	I1205 00:04:46.875769  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
	I1205 00:04:46.960343  216030 logs.go:123] Gathering logs for dmesg ...
	I1205 00:04:46.960376  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 00:04:46.978879  216030 logs.go:123] Gathering logs for kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] ...
	I1205 00:04:46.978908  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
	I1205 00:04:47.043184  216030 logs.go:123] Gathering logs for etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] ...
	I1205 00:04:47.043220  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
	I1205 00:04:47.095108  216030 logs.go:123] Gathering logs for coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] ...
	I1205 00:04:47.095137  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
	I1205 00:04:47.138073  216030 logs.go:123] Gathering logs for kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] ...
	I1205 00:04:47.138112  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
	I1205 00:04:44.698855  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:46.698935  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:47.200917  216030 logs.go:123] Gathering logs for kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] ...
	I1205 00:04:47.200959  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
	I1205 00:04:47.290017  216030 logs.go:123] Gathering logs for container status ...
	I1205 00:04:47.290077  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
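	(The "container status" gather above is more defensive than the per-component listings: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` tries crictl first — the CRI-aware CLI, appropriate for the containerd runtime under test — and falls back to `docker ps -a` if crictl is absent or errors. The same fallback, sketched in Go as an illustration rather than minikube's code:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // containerStatus mimics the fallback visible in the log:
	    // prefer crictl for CRI runtimes, fall back to docker if
	    // crictl is missing or fails. minikube runs the equivalent
	    // as a single `bash -c` string over SSH.
	    func containerStatus() (string, error) {
	        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
	            return string(out), nil
	        }
	        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        out, err := containerStatus()
	        if err != nil {
	            fmt.Println("both runtimes failed:", err)
	            return
	        }
	        fmt.Print(out)
	    })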
	I1205 00:04:47.355835  216030 logs.go:123] Gathering logs for kubelet ...
	I1205 00:04:47.355861  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 00:04:47.415957  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029161     664 reflector.go:138] object-"kube-system"/"kindnet-token-rrxv8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rrxv8" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:47.416229  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029640     664 reflector.go:138] object-"default"/"default-token-6q5g5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6q5g5" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:47.416462  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029889     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7b2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7b2f" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:47.422607  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:23 old-k8s-version-066167 kubelet[664]: E1204 23:59:23.455493     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.422894  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:24 old-k8s-version-066167 kubelet[664]: E1204 23:59:24.408536     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.425873  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:36 old-k8s-version-066167 kubelet[664]: E1204 23:59:36.194820     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.427977  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:46 old-k8s-version-066167 kubelet[664]: E1204 23:59:46.769553     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.428166  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.173711     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.428495  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.774292     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.429164  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.679719     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.429606  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.812800     664 pod_workers.go:191] Error syncing pod 81fe575b-ab3c-49a1-b013-84ec8c0bea1c ("storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"
	W1205 00:04:47.432365  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:00 old-k8s-version-066167 kubelet[664]: E1205 00:00:00.315343     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.432950  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:08 old-k8s-version-066167 kubelet[664]: E1205 00:00:08.854222     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.433315  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:11 old-k8s-version-066167 kubelet[664]: E1205 00:00:11.169368     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.433645  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:15 old-k8s-version-066167 kubelet[664]: E1205 00:00:15.678257     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.433831  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:26 old-k8s-version-066167 kubelet[664]: E1205 00:00:26.169392     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.434156  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:27 old-k8s-version-066167 kubelet[664]: E1205 00:00:27.168949     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.434742  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:39 old-k8s-version-066167 kubelet[664]: E1205 00:00:39.964267     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.437168  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:41 old-k8s-version-066167 kubelet[664]: E1205 00:00:41.177237     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.437499  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:45 old-k8s-version-066167 kubelet[664]: E1205 00:00:45.677813     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.437686  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:54 old-k8s-version-066167 kubelet[664]: E1205 00:00:54.170310     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.438017  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:57 old-k8s-version-066167 kubelet[664]: E1205 00:00:57.168714     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.438200  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:07 old-k8s-version-066167 kubelet[664]: E1205 00:01:07.169610     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.438538  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:08 old-k8s-version-066167 kubelet[664]: E1205 00:01:08.169137     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.439120  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.080372     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.439303  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.172882     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.439631  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:25 old-k8s-version-066167 kubelet[664]: E1205 00:01:25.677810     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.439814  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:36 old-k8s-version-066167 kubelet[664]: E1205 00:01:36.169551     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.440143  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:39 old-k8s-version-066167 kubelet[664]: E1205 00:01:39.170525     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.440328  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:50 old-k8s-version-066167 kubelet[664]: E1205 00:01:50.170399     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.440657  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:52 old-k8s-version-066167 kubelet[664]: E1205 00:01:52.168832     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.441030  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:03 old-k8s-version-066167 kubelet[664]: E1205 00:02:03.168796     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.443463  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:04 old-k8s-version-066167 kubelet[664]: E1205 00:02:04.179849     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:47.443782  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.169877     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.443980  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.170577     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.444164  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:27 old-k8s-version-066167 kubelet[664]: E1205 00:02:27.169252     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.444490  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:30 old-k8s-version-066167 kubelet[664]: E1205 00:02:30.169307     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.444674  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:40 old-k8s-version-066167 kubelet[664]: E1205 00:02:40.172577     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.445266  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:43 old-k8s-version-066167 kubelet[664]: E1205 00:02:43.346410     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.445596  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:45 old-k8s-version-066167 kubelet[664]: E1205 00:02:45.677951     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.445781  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:53 old-k8s-version-066167 kubelet[664]: E1205 00:02:53.169354     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.446106  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:58 old-k8s-version-066167 kubelet[664]: E1205 00:02:58.169560     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.446289  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:08 old-k8s-version-066167 kubelet[664]: E1205 00:03:08.172186     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.446622  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:12 old-k8s-version-066167 kubelet[664]: E1205 00:03:12.169424     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.446806  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:21 old-k8s-version-066167 kubelet[664]: E1205 00:03:21.169226     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.447136  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:23 old-k8s-version-066167 kubelet[664]: E1205 00:03:23.168967     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.447319  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:34 old-k8s-version-066167 kubelet[664]: E1205 00:03:34.173087     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.447646  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.447972  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.448154  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.448468  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.448666  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.448992  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.449185  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.449511  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.449719  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.450052  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.450238  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
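	(One detail worth reading out of the run above: the dashboard-metrics-scraper CrashLoopBackOff delay climbs 10s, 20s, 40s, 1m20s, 2m40s. That is the kubelet's standard restart back-off, which doubles per failed restart and — a cap not shown in this log, so treat it as an assumption — saturates at 5m. A tiny Go illustration of the progression:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // Reproduces the back-off progression visible in the kubelet
	    // entries above: doubling from 10s, capped (assumed) at 5m.
	    func main() {
	        delay, maxDelay := 10*time.Second, 5*time.Minute
	        for delay < maxDelay {
	            fmt.Println("back-off", delay, "restarting failed container")
	            delay *= 2
	        }
	        fmt.Println("back-off", maxDelay, "restarting failed container (capped)")
	    }

	The metrics-server pod, by contrast, alternates ErrImagePull and ImagePullBackOff because its image is pinned to the unresolvable fake.domain registry, visible in the start output — so those entries look deliberate to this test rather than being the failure itself.)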
	I1205 00:04:47.450255  216030 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:47.450266  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 00:04:47.450326  216030 out.go:270] X Problems detected in kubelet:
	W1205 00:04:47.450338  216030 out.go:270]   Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.450346  216030 out.go:270]   Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.450353  216030 out.go:270]   Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:47.450362  216030 out.go:270]   Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:47.450370  216030 out.go:270]   Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1205 00:04:47.450378  216030 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:47.450384  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
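	(The W-lines and the "Problems detected in kubelet" summary above come from scanning the `journalctl -u kubelet -n 400` output for known failure signatures and echoing the most recent hits. A rough reconstruction of that scan — the real pattern set lives in minikube's logs.go; the regex and the five-entry tail here are assumptions:

	    package main

	    import (
	        "bufio"
	        "bytes"
	        "fmt"
	        "os/exec"
	        "regexp"
	    )

	    // Hypothetical signatures modeled on the entries flagged
	    // above: pod sync failures and reflector list/watch failures.
	    var problemRE = regexp.MustCompile(
	        `pod_workers\.go.*Error syncing pod|reflector\.go.*Failed to watch`)

	    func main() {
	        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	        if err != nil {
	            fmt.Println("journalctl:", err)
	            return
	        }
	        var problems []string
	        sc := bufio.NewScanner(bytes.NewReader(out))
	        for sc.Scan() {
	            if problemRE.MatchString(sc.Text()) {
	                problems = append(problems, sc.Text())
	            }
	        }
	        if n := len(problems); n > 5 { // echo only the most recent few
	            problems = problems[n-5:]
	        }
	        for _, p := range problems {
	            fmt.Println("Found kubelet problem:", p)
	        }
	    })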
	I1205 00:04:49.198557  221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
	I1205 00:04:49.698032  221677 pod_ready.go:82] duration metric: took 4m0.00632943s for pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace to be "Ready" ...
	E1205 00:04:49.698060  221677 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 00:04:49.698069  221677 pod_ready.go:39] duration metric: took 4m14.281527329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
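	(Meanwhile the second profile in this run — PID 221677, the no-preload-013030 cluster — just hit its own wall: pod_ready.go polled metrics-server-6867b74b74-kz2tf for the Ready condition for 4m and gave up with "context deadline exceeded". The wait pattern, sketched with client-go; the kubeconfig path and 2s poll interval are assumptions, and this is not minikube's pod_ready.go:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitPodReady polls a pod's Ready condition until the
	    // context deadline, mirroring the 4m wait that times out
	    // in the log above.
	    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	        for {
	            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	            if err == nil {
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                        return nil
	                    }
	                }
	            }
	            select {
	            case <-ctx.Done():
	                return ctx.Err() // "context deadline exceeded", as in the log
	            case <-time.After(2 * time.Second):
	            }
	        }
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	        defer cancel()
	        fmt.Println("ready wait result:",
	            waitPodReady(ctx, cs, "kube-system", "metrics-server-6867b74b74-kz2tf"))
	    }

	Since the Ready condition never flips, the caller falls through to the same log-gathering machinery seen above for the old-k8s-version profile.)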
	I1205 00:04:49.698084  221677 api_server.go:52] waiting for apiserver process to appear ...
	I1205 00:04:49.698114  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 00:04:49.698172  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 00:04:49.742404  221677 cri.go:89] found id: "ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad"
	I1205 00:04:49.742429  221677 cri.go:89] found id: "e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
	I1205 00:04:49.742433  221677 cri.go:89] found id: ""
	I1205 00:04:49.742441  221677 logs.go:282] 2 containers: [ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c]
	I1205 00:04:49.742497  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.746365  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.750155  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 00:04:49.750233  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 00:04:49.789016  221677 cri.go:89] found id: "7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502"
	I1205 00:04:49.789040  221677 cri.go:89] found id: "55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
	I1205 00:04:49.789046  221677 cri.go:89] found id: ""
	I1205 00:04:49.789053  221677 logs.go:282] 2 containers: [7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502 55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778]
	I1205 00:04:49.789161  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.792800  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.796296  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 00:04:49.796370  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 00:04:49.833967  221677 cri.go:89] found id: "340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc"
	I1205 00:04:49.833990  221677 cri.go:89] found id: "3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
	I1205 00:04:49.833996  221677 cri.go:89] found id: ""
	I1205 00:04:49.834004  221677 logs.go:282] 2 containers: [340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc 3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9]
	I1205 00:04:49.834082  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.837887  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.841454  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 00:04:49.841550  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 00:04:49.878206  221677 cri.go:89] found id: "253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64"
	I1205 00:04:49.878231  221677 cri.go:89] found id: "627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
	I1205 00:04:49.878235  221677 cri.go:89] found id: ""
	I1205 00:04:49.878243  221677 logs.go:282] 2 containers: [253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64 627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6]
	I1205 00:04:49.878302  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.882058  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.885685  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 00:04:49.885762  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 00:04:49.923095  221677 cri.go:89] found id: "80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c"
	I1205 00:04:49.923179  221677 cri.go:89] found id: "f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
	I1205 00:04:49.923199  221677 cri.go:89] found id: ""
	I1205 00:04:49.923211  221677 logs.go:282] 2 containers: [80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed]
	I1205 00:04:49.923274  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.926709  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.930399  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 00:04:49.930497  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 00:04:49.973709  221677 cri.go:89] found id: "13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94"
	I1205 00:04:49.973774  221677 cri.go:89] found id: "8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
	I1205 00:04:49.973795  221677 cri.go:89] found id: ""
	I1205 00:04:49.973820  221677 logs.go:282] 2 containers: [13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94 8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654]
	I1205 00:04:49.973896  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.977814  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:49.981292  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 00:04:49.981394  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 00:04:50.029660  221677 cri.go:89] found id: "0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3"
	I1205 00:04:50.029727  221677 cri.go:89] found id: "fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
	I1205 00:04:50.029745  221677 cri.go:89] found id: ""
	I1205 00:04:50.029760  221677 logs.go:282] 2 containers: [0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3 fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be]
	I1205 00:04:50.029823  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:50.034042  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:50.038266  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 00:04:50.038363  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 00:04:50.079366  221677 cri.go:89] found id: "317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393"
	I1205 00:04:50.079398  221677 cri.go:89] found id: ""
	I1205 00:04:50.079406  221677 logs.go:282] 1 containers: [317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393]
	I1205 00:04:50.079464  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:50.083616  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 00:04:50.083723  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 00:04:50.123759  221677 cri.go:89] found id: "473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8"
	I1205 00:04:50.123787  221677 cri.go:89] found id: "73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e"
	I1205 00:04:50.123793  221677 cri.go:89] found id: ""
	I1205 00:04:50.123800  221677 logs.go:282] 2 containers: [473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8 73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e]
	I1205 00:04:50.123858  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:50.127613  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:04:50.132424  221677 logs.go:123] Gathering logs for coredns [3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9] ...
	I1205 00:04:50.132452  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
	I1205 00:04:50.177377  221677 logs.go:123] Gathering logs for kindnet [0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3] ...
	I1205 00:04:50.177411  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3"
	I1205 00:04:50.221418  221677 logs.go:123] Gathering logs for kubernetes-dashboard [317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393] ...
	I1205 00:04:50.221450  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393"
	I1205 00:04:50.272318  221677 logs.go:123] Gathering logs for describe nodes ...
	I1205 00:04:50.272349  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 00:04:50.429061  221677 logs.go:123] Gathering logs for etcd [7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502] ...
	I1205 00:04:50.429091  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502"
	I1205 00:04:50.479805  221677 logs.go:123] Gathering logs for etcd [55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778] ...
	I1205 00:04:50.479835  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
	I1205 00:04:50.541743  221677 logs.go:123] Gathering logs for kubelet ...
	I1205 00:04:50.541781  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 00:04:50.592967  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.479571     658 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
	W1205 00:04:50.593244  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.479778     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:04:50.593431  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480370     658 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
	W1205 00:04:50.593674  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480506     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:04:50.593861  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480651     658 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'no-preload-013030' and this object
	W1205 00:04:50.594094  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480748     658 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:04:50.594285  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.488143     658 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
	W1205 00:04:50.594508  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.488360     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	I1205 00:04:50.644820  221677 logs.go:123] Gathering logs for kube-apiserver [e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c] ...
	I1205 00:04:50.644860  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
	I1205 00:04:50.713283  221677 logs.go:123] Gathering logs for kube-scheduler [253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64] ...
	I1205 00:04:50.713321  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64"
	I1205 00:04:50.773677  221677 logs.go:123] Gathering logs for kube-scheduler [627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6] ...
	I1205 00:04:50.773727  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
	I1205 00:04:50.850795  221677 logs.go:123] Gathering logs for kube-controller-manager [8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654] ...
	I1205 00:04:50.850871  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
	I1205 00:04:50.929818  221677 logs.go:123] Gathering logs for storage-provisioner [473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8] ...
	I1205 00:04:50.929890  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8"
	I1205 00:04:50.974600  221677 logs.go:123] Gathering logs for storage-provisioner [73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e] ...
	I1205 00:04:50.974633  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e"
	I1205 00:04:51.032274  221677 logs.go:123] Gathering logs for containerd ...
	I1205 00:04:51.032302  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 00:04:51.101703  221677 logs.go:123] Gathering logs for dmesg ...
	I1205 00:04:51.101797  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 00:04:51.120293  221677 logs.go:123] Gathering logs for kube-apiserver [ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad] ...
	I1205 00:04:51.120325  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad"
	I1205 00:04:51.191956  221677 logs.go:123] Gathering logs for coredns [340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc] ...
	I1205 00:04:51.192048  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc"
	I1205 00:04:51.249282  221677 logs.go:123] Gathering logs for kube-proxy [80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c] ...
	I1205 00:04:51.249313  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c"
	I1205 00:04:51.297608  221677 logs.go:123] Gathering logs for kube-proxy [f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed] ...
	I1205 00:04:51.297638  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
	I1205 00:04:51.340440  221677 logs.go:123] Gathering logs for kube-controller-manager [13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94] ...
	I1205 00:04:51.340467  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94"
	I1205 00:04:51.412346  221677 logs.go:123] Gathering logs for kindnet [fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be] ...
	I1205 00:04:51.412383  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
	I1205 00:04:51.457251  221677 logs.go:123] Gathering logs for container status ...
	I1205 00:04:51.457284  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 00:04:51.506219  221677 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:51.506242  221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 00:04:51.506295  221677 out.go:270] X Problems detected in kubelet:
	W1205 00:04:51.506307  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480506     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:04:51.506315  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480651     658 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'no-preload-013030' and this object
	W1205 00:04:51.506322  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480748     658 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:04:51.506328  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.488143     658 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
	W1205 00:04:51.506334  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.488360     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	I1205 00:04:51.506344  221677 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:51.506350  221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:04:57.451637  216030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 00:04:57.463556  216030 api_server.go:72] duration metric: took 5m57.25534682s to wait for apiserver process to appear ...
	I1205 00:04:57.463582  216030 api_server.go:88] waiting for apiserver healthz status ...
	I1205 00:04:57.463617  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 00:04:57.463679  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 00:04:57.502613  216030 cri.go:89] found id: "d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
	I1205 00:04:57.502634  216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
	I1205 00:04:57.502639  216030 cri.go:89] found id: ""
	I1205 00:04:57.502646  216030 logs.go:282] 2 containers: [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8]
	I1205 00:04:57.502706  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.506578  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.510329  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 00:04:57.510403  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 00:04:57.549412  216030 cri.go:89] found id: "d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
	I1205 00:04:57.549434  216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
	I1205 00:04:57.549439  216030 cri.go:89] found id: ""
	I1205 00:04:57.549446  216030 logs.go:282] 2 containers: [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e]
	I1205 00:04:57.549522  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.553176  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.556561  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 00:04:57.556630  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 00:04:57.606322  216030 cri.go:89] found id: "18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
	I1205 00:04:57.606344  216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
	I1205 00:04:57.606349  216030 cri.go:89] found id: ""
	I1205 00:04:57.606356  216030 logs.go:282] 2 containers: [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c]
	I1205 00:04:57.606414  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.610546  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.614234  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 00:04:57.614302  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 00:04:57.657522  216030 cri.go:89] found id: "4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
	I1205 00:04:57.657543  216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
	I1205 00:04:57.657549  216030 cri.go:89] found id: ""
	I1205 00:04:57.657556  216030 logs.go:282] 2 containers: [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0]
	I1205 00:04:57.657619  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.661379  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.664752  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 00:04:57.664830  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 00:04:57.712770  216030 cri.go:89] found id: "355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
	I1205 00:04:57.712861  216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
	I1205 00:04:57.712880  216030 cri.go:89] found id: ""
	I1205 00:04:57.712898  216030 logs.go:282] 2 containers: [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88]
	I1205 00:04:57.712996  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.717580  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.721738  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 00:04:57.721819  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 00:04:57.759280  216030 cri.go:89] found id: "0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
	I1205 00:04:57.759302  216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
	I1205 00:04:57.759307  216030 cri.go:89] found id: ""
	I1205 00:04:57.759314  216030 logs.go:282] 2 containers: [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15]
	I1205 00:04:57.759371  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.763240  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.766739  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 00:04:57.766823  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 00:04:57.804341  216030 cri.go:89] found id: "9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
	I1205 00:04:57.804366  216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
	I1205 00:04:57.804372  216030 cri.go:89] found id: ""
	I1205 00:04:57.804379  216030 logs.go:282] 2 containers: [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d]
	I1205 00:04:57.804439  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.808307  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.811971  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 00:04:57.812044  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 00:04:57.865535  216030 cri.go:89] found id: "61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
	I1205 00:04:57.865556  216030 cri.go:89] found id: "cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
	I1205 00:04:57.865561  216030 cri.go:89] found id: ""
	I1205 00:04:57.865568  216030 logs.go:282] 2 containers: [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf]
	I1205 00:04:57.865627  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.869504  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.872895  216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 00:04:57.873022  216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 00:04:57.913425  216030 cri.go:89] found id: "eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
	I1205 00:04:57.913449  216030 cri.go:89] found id: ""
	I1205 00:04:57.913463  216030 logs.go:282] 1 containers: [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e]
	I1205 00:04:57.913526  216030 ssh_runner.go:195] Run: which crictl
	I1205 00:04:57.917503  216030 logs.go:123] Gathering logs for kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] ...
	I1205 00:04:57.917529  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
	I1205 00:04:57.959718  216030 logs.go:123] Gathering logs for kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] ...
	I1205 00:04:57.959742  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
	I1205 00:04:58.030401  216030 logs.go:123] Gathering logs for storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] ...
	I1205 00:04:58.030436  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
	I1205 00:04:58.089905  216030 logs.go:123] Gathering logs for storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] ...
	I1205 00:04:58.089933  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
	I1205 00:04:58.129773  216030 logs.go:123] Gathering logs for etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] ...
	I1205 00:04:58.129861  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
	I1205 00:04:58.170834  216030 logs.go:123] Gathering logs for coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] ...
	I1205 00:04:58.170863  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
	I1205 00:04:58.217420  216030 logs.go:123] Gathering logs for kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] ...
	I1205 00:04:58.217449  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
	I1205 00:04:58.264707  216030 logs.go:123] Gathering logs for kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] ...
	I1205 00:04:58.264735  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
	I1205 00:04:58.314661  216030 logs.go:123] Gathering logs for kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] ...
	I1205 00:04:58.314686  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
	I1205 00:04:58.372507  216030 logs.go:123] Gathering logs for kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] ...
	I1205 00:04:58.372541  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
	I1205 00:04:58.414881  216030 logs.go:123] Gathering logs for kubelet ...
	I1205 00:04:58.414910  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 00:04:58.477133  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029161     664 reflector.go:138] object-"kube-system"/"kindnet-token-rrxv8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rrxv8" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:58.477409  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029640     664 reflector.go:138] object-"default"/"default-token-6q5g5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6q5g5" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:58.477645  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029889     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7b2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7b2f" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
	W1205 00:04:58.483742  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:23 old-k8s-version-066167 kubelet[664]: E1204 23:59:23.455493     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.484031  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:24 old-k8s-version-066167 kubelet[664]: E1204 23:59:24.408536     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.486976  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:36 old-k8s-version-066167 kubelet[664]: E1204 23:59:36.194820     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.489032  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:46 old-k8s-version-066167 kubelet[664]: E1204 23:59:46.769553     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.489223  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.173711     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.489549  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.774292     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.490248  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.679719     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.490681  216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.812800     664 pod_workers.go:191] Error syncing pod 81fe575b-ab3c-49a1-b013-84ec8c0bea1c ("storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"
	W1205 00:04:58.493488  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:00 old-k8s-version-066167 kubelet[664]: E1205 00:00:00.315343     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.494069  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:08 old-k8s-version-066167 kubelet[664]: E1205 00:00:08.854222     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.494382  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:11 old-k8s-version-066167 kubelet[664]: E1205 00:00:11.169368     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.494707  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:15 old-k8s-version-066167 kubelet[664]: E1205 00:00:15.678257     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.494888  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:26 old-k8s-version-066167 kubelet[664]: E1205 00:00:26.169392     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.495213  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:27 old-k8s-version-066167 kubelet[664]: E1205 00:00:27.168949     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.495824  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:39 old-k8s-version-066167 kubelet[664]: E1205 00:00:39.964267     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.498358  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:41 old-k8s-version-066167 kubelet[664]: E1205 00:00:41.177237     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.498692  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:45 old-k8s-version-066167 kubelet[664]: E1205 00:00:45.677813     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.498876  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:54 old-k8s-version-066167 kubelet[664]: E1205 00:00:54.170310     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.499205  216030 logs.go:138] Found kubelet problem: Dec 05 00:00:57 old-k8s-version-066167 kubelet[664]: E1205 00:00:57.168714     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.499388  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:07 old-k8s-version-066167 kubelet[664]: E1205 00:01:07.169610     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.499711  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:08 old-k8s-version-066167 kubelet[664]: E1205 00:01:08.169137     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.500296  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.080372     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.500479  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.172882     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.500805  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:25 old-k8s-version-066167 kubelet[664]: E1205 00:01:25.677810     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.500988  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:36 old-k8s-version-066167 kubelet[664]: E1205 00:01:36.169551     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.501317  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:39 old-k8s-version-066167 kubelet[664]: E1205 00:01:39.170525     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.501505  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:50 old-k8s-version-066167 kubelet[664]: E1205 00:01:50.170399     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.501834  216030 logs.go:138] Found kubelet problem: Dec 05 00:01:52 old-k8s-version-066167 kubelet[664]: E1205 00:01:52.168832     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.502157  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:03 old-k8s-version-066167 kubelet[664]: E1205 00:02:03.168796     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.504721  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:04 old-k8s-version-066167 kubelet[664]: E1205 00:02:04.179849     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1205 00:04:58.505045  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.169877     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.505250  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.170577     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.505435  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:27 old-k8s-version-066167 kubelet[664]: E1205 00:02:27.169252     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.505771  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:30 old-k8s-version-066167 kubelet[664]: E1205 00:02:30.169307     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.505960  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:40 old-k8s-version-066167 kubelet[664]: E1205 00:02:40.172577     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.506540  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:43 old-k8s-version-066167 kubelet[664]: E1205 00:02:43.346410     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.506865  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:45 old-k8s-version-066167 kubelet[664]: E1205 00:02:45.677951     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.507048  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:53 old-k8s-version-066167 kubelet[664]: E1205 00:02:53.169354     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.507371  216030 logs.go:138] Found kubelet problem: Dec 05 00:02:58 old-k8s-version-066167 kubelet[664]: E1205 00:02:58.169560     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.507552  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:08 old-k8s-version-066167 kubelet[664]: E1205 00:03:08.172186     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.507880  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:12 old-k8s-version-066167 kubelet[664]: E1205 00:03:12.169424     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.508061  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:21 old-k8s-version-066167 kubelet[664]: E1205 00:03:21.169226     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.508388  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:23 old-k8s-version-066167 kubelet[664]: E1205 00:03:23.168967     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.508586  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:34 old-k8s-version-066167 kubelet[664]: E1205 00:03:34.173087     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.508910  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.509240  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.509423  216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.509780  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.509978  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.510307  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.510488  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.510817  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.511000  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.511326  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.511509  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:58.511836  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:58.514252  216030 logs.go:138] Found kubelet problem: Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	I1205 00:04:58.514266  216030 logs.go:123] Gathering logs for dmesg ...
	I1205 00:04:58.514280  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 00:04:58.534466  216030 logs.go:123] Gathering logs for describe nodes ...
	I1205 00:04:58.534492  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 00:04:58.682096  216030 logs.go:123] Gathering logs for coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] ...
	I1205 00:04:58.682123  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
	I1205 00:04:58.725405  216030 logs.go:123] Gathering logs for kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] ...
	I1205 00:04:58.725431  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
	I1205 00:04:58.773720  216030 logs.go:123] Gathering logs for containerd ...
	I1205 00:04:58.773748  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 00:04:58.836186  216030 logs.go:123] Gathering logs for kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] ...
	I1205 00:04:58.836222  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
	I1205 00:04:58.899828  216030 logs.go:123] Gathering logs for container status ...
	I1205 00:04:58.899854  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 00:04:58.942941  216030 logs.go:123] Gathering logs for kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] ...
	I1205 00:04:58.942971  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
	I1205 00:04:59.018047  216030 logs.go:123] Gathering logs for kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] ...
	I1205 00:04:59.018114  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
	I1205 00:04:59.103130  216030 logs.go:123] Gathering logs for etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] ...
	I1205 00:04:59.103163  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
	I1205 00:04:59.151511  216030 logs.go:123] Gathering logs for kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] ...
	I1205 00:04:59.151539  216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
	I1205 00:04:59.192352  216030 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:59.192377  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 00:04:59.192481  216030 out.go:270] X Problems detected in kubelet:
	W1205 00:04:59.192494  216030 out.go:270]   Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:59.192519  216030 out.go:270]   Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:59.192537  216030 out.go:270]   Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1205 00:04:59.192564  216030 out.go:270]   Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	W1205 00:04:59.192574  216030 out.go:270]   Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	I1205 00:04:59.192585  216030 out.go:358] Setting ErrFile to fd 2...
	I1205 00:04:59.192591  216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
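
	All five kubelet problems above reduce to a single root cause: the metrics-server pod points at fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain never resolves (presumably the test's stand-in for an unreachable registry), so every pull attempt fails in DNS and backs off. A minimal Go sketch that reproduces the same "no such host" failure, assuming only that fake.domain is unresolvable from the machine running it:

	    package main

	    import (
	    	"fmt"
	    	"net"
	    )

	    func main() {
	    	// The kubelet's image pull dies at name resolution; this lookup
	    	// should fail the same way: "lookup fake.domain ... no such host".
	    	if addrs, err := net.LookupHost("fake.domain"); err != nil {
	    		fmt.Println("lookup failed as expected:", err)
	    	} else {
	    		fmt.Println("unexpectedly resolved:", addrs)
	    	}
	    }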
	I1205 00:05:01.508204  221677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 00:05:01.522927  221677 api_server.go:72] duration metric: took 4m31.747740213s to wait for apiserver process to appear ...
	I1205 00:05:01.522953  221677 api_server.go:88] waiting for apiserver healthz status ...
	I1205 00:05:01.522997  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1205 00:05:01.523070  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 00:05:01.570928  221677 cri.go:89] found id: "ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad"
	I1205 00:05:01.570955  221677 cri.go:89] found id: "e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
	I1205 00:05:01.570961  221677 cri.go:89] found id: ""
	I1205 00:05:01.570969  221677 logs.go:282] 2 containers: [ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c]
	I1205 00:05:01.571031  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.575102  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.579218  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1205 00:05:01.579387  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 00:05:01.630856  221677 cri.go:89] found id: "7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502"
	I1205 00:05:01.630879  221677 cri.go:89] found id: "55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
	I1205 00:05:01.630884  221677 cri.go:89] found id: ""
	I1205 00:05:01.630892  221677 logs.go:282] 2 containers: [7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502 55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778]
	I1205 00:05:01.630954  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.635207  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.639199  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1205 00:05:01.639278  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 00:05:01.685416  221677 cri.go:89] found id: "340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc"
	I1205 00:05:01.685444  221677 cri.go:89] found id: "3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
	I1205 00:05:01.685449  221677 cri.go:89] found id: ""
	I1205 00:05:01.685460  221677 logs.go:282] 2 containers: [340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc 3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9]
	I1205 00:05:01.685573  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.690213  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.694387  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1205 00:05:01.694473  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 00:05:01.743952  221677 cri.go:89] found id: "253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64"
	I1205 00:05:01.743979  221677 cri.go:89] found id: "627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
	I1205 00:05:01.743984  221677 cri.go:89] found id: ""
	I1205 00:05:01.743993  221677 logs.go:282] 2 containers: [253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64 627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6]
	I1205 00:05:01.744058  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.748795  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.753296  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1205 00:05:01.753409  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 00:05:01.803333  221677 cri.go:89] found id: "80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c"
	I1205 00:05:01.803359  221677 cri.go:89] found id: "f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
	I1205 00:05:01.803366  221677 cri.go:89] found id: ""
	I1205 00:05:01.803375  221677 logs.go:282] 2 containers: [80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed]
	I1205 00:05:01.803474  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.808123  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.812320  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 00:05:01.812434  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 00:05:01.869520  221677 cri.go:89] found id: "13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94"
	I1205 00:05:01.869542  221677 cri.go:89] found id: "8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
	I1205 00:05:01.869547  221677 cri.go:89] found id: ""
	I1205 00:05:01.869555  221677 logs.go:282] 2 containers: [13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94 8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654]
	I1205 00:05:01.869655  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.874792  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.878997  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1205 00:05:01.879103  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 00:05:01.919002  221677 cri.go:89] found id: "0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3"
	I1205 00:05:01.919029  221677 cri.go:89] found id: "fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
	I1205 00:05:01.919035  221677 cri.go:89] found id: ""
	I1205 00:05:01.919042  221677 logs.go:282] 2 containers: [0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3 fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be]
	I1205 00:05:01.919198  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.924478  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.928689  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 00:05:01.928872  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 00:05:01.970375  221677 cri.go:89] found id: "317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393"
	I1205 00:05:01.970443  221677 cri.go:89] found id: ""
	I1205 00:05:01.970464  221677 logs.go:282] 1 containers: [317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393]
	I1205 00:05:01.970549  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:01.974739  221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1205 00:05:01.974813  221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 00:05:02.022289  221677 cri.go:89] found id: "473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8"
	I1205 00:05:02.022313  221677 cri.go:89] found id: "73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e"
	I1205 00:05:02.022318  221677 cri.go:89] found id: ""
	I1205 00:05:02.022327  221677 logs.go:282] 2 containers: [473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8 73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e]
	I1205 00:05:02.022395  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:02.027944  221677 ssh_runner.go:195] Run: which crictl
	I1205 00:05:02.032280  221677 logs.go:123] Gathering logs for kube-controller-manager [13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94] ...
	I1205 00:05:02.032356  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94"
	I1205 00:05:02.099044  221677 logs.go:123] Gathering logs for kindnet [0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3] ...
	I1205 00:05:02.099083  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3"
	I1205 00:05:02.150515  221677 logs.go:123] Gathering logs for kubelet ...
	I1205 00:05:02.150562  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1205 00:05:02.196355  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.479571     658 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
	W1205 00:05:02.196636  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.479778     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:05:02.196816  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480370     658 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
	W1205 00:05:02.197036  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480506     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:05:02.197232  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480651     658 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'no-preload-013030' and this object
	W1205 00:05:02.197465  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480748     658 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:05:02.197645  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.488143     658 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
	W1205 00:05:02.197869  221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.488360     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	I1205 00:05:02.247348  221677 logs.go:123] Gathering logs for describe nodes ...
	I1205 00:05:02.247398  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 00:05:02.401726  221677 logs.go:123] Gathering logs for kube-apiserver [e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c] ...
	I1205 00:05:02.401760  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
	I1205 00:05:02.471391  221677 logs.go:123] Gathering logs for etcd [7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502] ...
	I1205 00:05:02.471421  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502"
	I1205 00:05:02.526319  221677 logs.go:123] Gathering logs for coredns [3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9] ...
	I1205 00:05:02.526353  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
	I1205 00:05:02.566270  221677 logs.go:123] Gathering logs for kube-scheduler [253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64] ...
	I1205 00:05:02.566299  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64"
	I1205 00:05:02.610704  221677 logs.go:123] Gathering logs for kube-proxy [f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed] ...
	I1205 00:05:02.610788  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
	I1205 00:05:02.656045  221677 logs.go:123] Gathering logs for storage-provisioner [73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e] ...
	I1205 00:05:02.656073  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e"
	I1205 00:05:02.697643  221677 logs.go:123] Gathering logs for containerd ...
	I1205 00:05:02.697670  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1205 00:05:02.763297  221677 logs.go:123] Gathering logs for container status ...
	I1205 00:05:02.763334  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 00:05:02.805439  221677 logs.go:123] Gathering logs for kube-apiserver [ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad] ...
	I1205 00:05:02.805474  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad"
	I1205 00:05:02.857992  221677 logs.go:123] Gathering logs for coredns [340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc] ...
	I1205 00:05:02.858025  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc"
	I1205 00:05:02.898798  221677 logs.go:123] Gathering logs for kube-scheduler [627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6] ...
	I1205 00:05:02.898833  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
	I1205 00:05:02.958129  221677 logs.go:123] Gathering logs for kube-proxy [80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c] ...
	I1205 00:05:02.958159  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c"
	I1205 00:05:03.007046  221677 logs.go:123] Gathering logs for kube-controller-manager [8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654] ...
	I1205 00:05:03.007080  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
	I1205 00:05:03.091201  221677 logs.go:123] Gathering logs for kindnet [fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be] ...
	I1205 00:05:03.091285  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
	I1205 00:05:03.133446  221677 logs.go:123] Gathering logs for storage-provisioner [473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8] ...
	I1205 00:05:03.133475  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8"
	I1205 00:05:03.171656  221677 logs.go:123] Gathering logs for dmesg ...
	I1205 00:05:03.171685  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 00:05:03.188416  221677 logs.go:123] Gathering logs for etcd [55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778] ...
	I1205 00:05:03.188446  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
	I1205 00:05:03.240042  221677 logs.go:123] Gathering logs for kubernetes-dashboard [317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393] ...
	I1205 00:05:03.240071  221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393"
	I1205 00:05:03.302489  221677 out.go:358] Setting ErrFile to fd 2...
	I1205 00:05:03.302514  221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1205 00:05:03.302593  221677 out.go:270] X Problems detected in kubelet:
	W1205 00:05:03.302608  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480506     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:05:03.302619  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480651     658 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'no-preload-013030' and this object
	W1205 00:05:03.302645  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480748     658 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	W1205 00:05:03.302653  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.488143     658 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
	W1205 00:05:03.302659  221677 out.go:270]   Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.488360     658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
	I1205 00:05:03.302664  221677 out.go:358] Setting ErrFile to fd 2...
	I1205 00:05:03.302670  221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
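
	The reflector errors above ("no relationship found between node ... and this object") look like the node authorizer at work: right after the restart the kubelet lists configmaps before the apiserver has rebuilt its node-to-object graph, so the List is forbidden, and the errors stop once the graph catches up. For reference, a hedged client-go sketch of the same List call; the kubeconfig path is a placeholder, and the kubelet issues this with its own node credentials, which is exactly what gets rejected:

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	// "/path/to/kubeconfig" is a placeholder for illustration.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// The same call the kubelet's reflector makes; with node
	    	// credentials it is forbidden until the graph is rebuilt.
	    	cms, err := cs.CoreV1().ConfigMaps("kube-system").List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		fmt.Println("list forbidden (as in the log):", err)
	    		return
	    	}
	    	fmt.Println("listed", len(cms.Items), "configmaps")
	    }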
	I1205 00:05:09.194046  216030 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1205 00:05:09.205203  216030 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1205 00:05:09.208473  216030 out.go:201] 
	W1205 00:05:09.210824  216030 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1205 00:05:09.210861  216030 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1205 00:05:09.210876  216030 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1205 00:05:09.210882  216030 out.go:270] * 
	W1205 00:05:09.211748  216030 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 00:05:09.215055  216030 out.go:201] 
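
	Note the shape of the failure: /healthz on the apiserver answers "200: ok", yet the run still exits 102 with K8S_UNHEALTHY_CONTROL_PLANE because the control plane never reported v1.20.0 within the six-minute wait. A minimal sketch of the same endpoint probe; InsecureSkipVerify is an assumption made only to keep the sketch self-contained, whereas minikube's checker authenticates with the cluster's CA and client certificates:

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	// Probe the endpoint the log shows returning "200: ok".
	    	client := &http.Client{
	    		Timeout:   5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	    	if err != nil {
	    		fmt.Println("healthz request failed:", err)
	    		return
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	    }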
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	e35397ca54a10       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   88a4f6f081a06       dashboard-metrics-scraper-8d5bb5db8-z9qx4
	61ffbe5238187       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   7890e490c1e7d       storage-provisioner
	eadd97cb808fe       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   9f6eaac45e6b1       kubernetes-dashboard-cd95d586-lgvv5
	2be3a6e2ebc5b       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   7f7f7151c29d7       busybox
	9c39da019cfbd       55b97e1cbb2a3       5 minutes ago       Running             kindnet-cni                 1                   af5717e323083       kindnet-k6vqq
	cf535a7a2872e       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   7890e490c1e7d       storage-provisioner
	355e63aab5c7c       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   dc4078ab288f4       kube-proxy-xh97b
	18e042e221094       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   4c9f69be9184e       coredns-74ff55c5b-vb8kf
	4576860463a38       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   8b0356ea09a06       kube-scheduler-old-k8s-version-066167
	d730ecbd86d8e       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   71daf0b305a43       kube-apiserver-old-k8s-version-066167
	d9b089970902b       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   f66b8154afe5d       etcd-old-k8s-version-066167
	0c57ea5d02a99       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   8da4499d8fc97       kube-controller-manager-old-k8s-version-066167
	77680a8421a1a       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   ff60d0688efb6       busybox
	9e6a318a81516       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   fe88c9a9aa083       coredns-74ff55c5b-vb8kf
	3bd39d78282b9       55b97e1cbb2a3       7 minutes ago       Exited              kindnet-cni                 0                   0242dfcaa2f5b       kindnet-k6vqq
	f9c9b2c0e523b       25a5233254979       7 minutes ago       Exited              kube-proxy                  0                   fd396678ab46a       kube-proxy-xh97b
	cc6be8b93da47       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   1884db4f94ffc       kube-controller-manager-old-k8s-version-066167
	05ccefe05793d       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   9fa098d318588       kube-scheduler-old-k8s-version-066167
	138be331ccdd2       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   6198991f7b4d9       kube-apiserver-old-k8s-version-066167
	03a793869f775       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   78e47ed3d4723       etcd-old-k8s-version-066167
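
	This table is the node-side view the harness collects with the "container status" command visible in the gathering lines earlier in the stderr. A small Go sketch that shells out the same way, assuming it runs somewhere crictl (or docker as the fallback) can reach the runtime:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// Same fallback chain the harness runs on the node.
	    	out, err := exec.Command("/bin/bash", "-c",
	    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	    	if err != nil {
	    		fmt.Println("command failed:", err)
	    	}
	    	fmt.Print(string(out))
	    }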
	
	
	==> containerd <==
	Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.189207482Z" level=info msg="CreateContainer within sandbox \"88a4f6f081a06eefb4d38ffff9384604cfd6ef36de26217ec3d6b89ee3c04d91\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\""
	Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.191390761Z" level=info msg="StartContainer for \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\""
	Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.282523237Z" level=info msg="StartContainer for \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\" returns successfully"
	Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.309489498Z" level=info msg="shim disconnected" id=e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3 namespace=k8s.io
	Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.309556228Z" level=warning msg="cleaning up after shim disconnected" id=e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3 namespace=k8s.io
	Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.309568339Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 05 00:01:22 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:22.085148265Z" level=info msg="RemoveContainer for \"28d12bf2326887bbf15f7098954f0db9d334df920bfcaa91b02887c4a7151cfa\""
	Dec 05 00:01:22 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:22.091842859Z" level=info msg="RemoveContainer for \"28d12bf2326887bbf15f7098954f0db9d334df920bfcaa91b02887c4a7151cfa\" returns successfully"
	Dec 05 00:02:04 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:04.169715770Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 00:02:04 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:04.177472631Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 05 00:02:04 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:04.179385389Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 05 00:02:04 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:04.179475807Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.178864306Z" level=info msg="CreateContainer within sandbox \"88a4f6f081a06eefb4d38ffff9384604cfd6ef36de26217ec3d6b89ee3c04d91\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.213737565Z" level=info msg="CreateContainer within sandbox \"88a4f6f081a06eefb4d38ffff9384604cfd6ef36de26217ec3d6b89ee3c04d91\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0\""
	Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.214663140Z" level=info msg="StartContainer for \"e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0\""
	Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.321629198Z" level=info msg="StartContainer for \"e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0\" returns successfully"
	Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.367523756Z" level=info msg="shim disconnected" id=e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0 namespace=k8s.io
	Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.367789980Z" level=warning msg="cleaning up after shim disconnected" id=e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0 namespace=k8s.io
	Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.367893107Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Dec 05 00:02:43 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:43.351059251Z" level=info msg="RemoveContainer for \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\""
	Dec 05 00:02:43 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:43.358671041Z" level=info msg="RemoveContainer for \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\" returns successfully"
	Dec 05 00:04:53 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:04:53.169453417Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 00:04:53 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:04:53.178302311Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Dec 05 00:04:53 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:04:53.180061990Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 05 00:04:53 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:04:53.180155386Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
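
	The containerd entries show where the pull actually dies: resolving the image reference begins with an HTTP HEAD against the manifest URL, and with fake.domain unresolvable that request fails in DNS before any HTTP exchange happens. A sketch issuing the same HEAD, which should fail identically anywhere fake.domain does not resolve:

	    package main

	    import (
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	// The manifest URL is taken verbatim from the containerd log above.
	    	client := &http.Client{Timeout: 5 * time.Second}
	    	resp, err := client.Head("https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4")
	    	if err != nil {
	    		fmt.Println("HEAD failed (as containerd logs):", err)
	    		return
	    	}
	    	resp.Body.Close()
	    	fmt.Println("unexpected status:", resp.Status)
	    }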
	
	
	==> coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:57304 - 21009 "HINFO IN 2296865237645504130.6773721313697179276. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028776707s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1204 23:59:53.678335       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-04 23:59:23.677696418 +0000 UTC m=+0.095812585) (total time: 30.000512524s):
	Trace[2019727887]: [30.000512524s] [30.000512524s] END
	E1204 23:59:53.678381       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1204 23:59:53.678836       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-04 23:59:23.678432987 +0000 UTC m=+0.096549187) (total time: 30.000384597s):
	Trace[939984059]: [30.000384597s] [30.000384597s] END
	E1204 23:59:53.678967       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1204 23:59:53.679127       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-04 23:59:23.678728941 +0000 UTC m=+0.096845116) (total time: 30.000385946s):
	Trace[911902081]: [30.000385946s] [30.000385946s] END
	E1204 23:59:53.679201       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
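
	The 30-second ListAndWatch timeouts above are this CoreDNS instance's client-go reflectors failing to reach the kubernetes service VIP (10.96.0.1:443), most likely because it came up right after the restart and before kube-proxy had reprogrammed the service rules; the other coredns log below shows no such errors. A sketch of the same reachability check, which is only meaningful from inside a pod's network namespace on this cluster:

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// Dial the service VIP the reflector was timing out against.
	    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	    	if err != nil {
	    		fmt.Println("dial failed:", err) // e.g. i/o timeout, as in the log
	    		return
	    	}
	    	conn.Close()
	    	fmt.Println("service VIP reachable")
	    }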
	
	
	==> coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35542 - 45034 "HINFO IN 8601463055701714850.6186383635598761622. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021848409s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-066167
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-066167
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=old-k8s-version-066167
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_56_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:56:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-066167
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 00:05:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 00:00:15 +0000   Wed, 04 Dec 2024 23:56:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 00:00:15 +0000   Wed, 04 Dec 2024 23:56:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 00:00:15 +0000   Wed, 04 Dec 2024 23:56:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 00:00:15 +0000   Wed, 04 Dec 2024 23:57:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-066167
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 63394a44c16444a7a3bcf859a64f3a4b
	  System UUID:                1119a8be-28eb-41ec-878c-8018329a0e7b
	  Boot ID:                    a4788b5f-5e14-4e80-9d00-4606b5d89fd6
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 coredns-74ff55c5b-vb8kf                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m1s
	  kube-system                 etcd-old-k8s-version-066167                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m9s
	  kube-system                 kindnet-k6vqq                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m1s
	  kube-system                 kube-apiserver-old-k8s-version-066167             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-controller-manager-old-k8s-version-066167    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-proxy-xh97b                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-scheduler-old-k8s-version-066167             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 metrics-server-9975d5f86-ksvdj                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m32s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-z9qx4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-lgvv5               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m28s (x5 over 8m28s)  kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s (x4 over 8m28s)  kubelet     Node old-k8s-version-066167 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s (x4 over 8m28s)  kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m10s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m10s                  kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m10s                  kubelet     Node old-k8s-version-066167 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m10s                  kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m9s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m1s                   kubelet     Node old-k8s-version-066167 status is now: NodeReady
	  Normal  Starting                 8m                     kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m3s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-066167 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)    kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Dec 4 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.436388] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024772] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.027958] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.026858] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.650973] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.181261] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 4 23:48] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 5 00:00] hrtimer: interrupt took 8247705 ns
	
	
	==> etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] <==
	2024-12-04 23:56:43.855411 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2024-12-04 23:56:43.855460 I | embed: listening for peers on 192.168.76.2:2380
	raft2024/12/04 23:56:43 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/12/04 23:56:43 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/12/04 23:56:43 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/12/04 23:56:43 INFO: ea7e25599daad906 became leader at term 2
	raft2024/12/04 23:56:43 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-12-04 23:56:43.903346 I | etcdserver: setting up the initial cluster version to 3.4
	2024-12-04 23:56:43.904283 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-12-04 23:56:43.904329 I | etcdserver/api: enabled capabilities for version 3.4
	2024-12-04 23:56:43.904368 I | etcdserver: published {Name:old-k8s-version-066167 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-12-04 23:56:43.904407 I | embed: ready to serve client requests
	2024-12-04 23:56:43.905904 I | embed: serving client requests on 192.168.76.2:2379
	2024-12-04 23:56:43.906172 I | embed: ready to serve client requests
	2024-12-04 23:56:43.907324 I | embed: serving client requests on 127.0.0.1:2379
	2024-12-04 23:57:07.093887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-04 23:57:16.521857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-04 23:57:26.522077 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-04 23:57:36.522112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-04 23:57:46.522195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-04 23:57:56.522862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-04 23:58:06.522337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-04 23:58:16.522029 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-04 23:58:26.522160 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-04 23:58:36.521975 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] <==
	2024-12-05 00:01:09.880500 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:01:19.880362 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:01:29.880399 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:01:39.880494 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:01:49.880592 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:01:59.880539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:02:09.880356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:02:19.880499 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:02:29.880348 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:02:39.880433 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:02:49.880359 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:02:59.880449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:03:09.880502 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:03:19.880527 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:03:29.880391 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:03:39.880419 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:03:49.880422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:03:59.880551 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:04:09.880381 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:04:19.880496 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:04:29.880380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:04:39.880318 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:04:49.880547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:04:59.880543 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-12-05 00:05:09.880568 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 00:05:11 up  1:47,  0 users,  load average: 2.40, 2.88, 2.81
	Linux old-k8s-version-066167 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] <==
	I1204 23:57:13.417500       1 main.go:148] setting mtu 1500 for CNI 
	I1204 23:57:13.417517       1 main.go:178] kindnetd IP family: "ipv4"
	I1204 23:57:13.417534       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1204 23:57:13.809731       1 controller.go:361] Starting controller kube-network-policies
	I1204 23:57:13.810149       1 controller.go:365] Waiting for informer caches to sync
	I1204 23:57:13.810263       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1204 23:57:14.014076       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1204 23:57:14.014105       1 metrics.go:61] Registering metrics
	I1204 23:57:14.014379       1 controller.go:401] Syncing nftables rules
	I1204 23:57:23.809571       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1204 23:57:23.809706       1 main.go:301] handling current node
	I1204 23:57:33.809999       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1204 23:57:33.810205       1 main.go:301] handling current node
	I1204 23:57:43.818780       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1204 23:57:43.818813       1 main.go:301] handling current node
	I1204 23:57:53.813449       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1204 23:57:53.813481       1 main.go:301] handling current node
	I1204 23:58:03.810116       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1204 23:58:03.810171       1 main.go:301] handling current node
	I1204 23:58:13.810035       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1204 23:58:13.810135       1 main.go:301] handling current node
	I1204 23:58:23.812509       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1204 23:58:23.812597       1 main.go:301] handling current node
	I1204 23:58:33.809527       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1204 23:58:33.809622       1 main.go:301] handling current node
	
	
	==> kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] <==
	I1205 00:03:06.118129       1 main.go:301] handling current node
	I1205 00:03:16.110260       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:03:16.110296       1 main.go:301] handling current node
	I1205 00:03:26.110529       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:03:26.110568       1 main.go:301] handling current node
	I1205 00:03:36.118125       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:03:36.118162       1 main.go:301] handling current node
	I1205 00:03:46.118155       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:03:46.118196       1 main.go:301] handling current node
	I1205 00:03:56.118159       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:03:56.118193       1 main.go:301] handling current node
	I1205 00:04:06.116282       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:04:06.116335       1 main.go:301] handling current node
	I1205 00:04:16.113050       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:04:16.113088       1 main.go:301] handling current node
	I1205 00:04:26.109817       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:04:26.109852       1 main.go:301] handling current node
	I1205 00:04:36.114087       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:04:36.114124       1 main.go:301] handling current node
	I1205 00:04:46.114628       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:04:46.114667       1 main.go:301] handling current node
	I1205 00:04:56.117261       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:04:56.117295       1 main.go:301] handling current node
	I1205 00:05:06.117234       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1205 00:05:06.117269       1 main.go:301] handling current node
	
	
	==> kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] <==
	I1204 23:56:51.367354       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1204 23:56:51.367383       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1204 23:56:51.378085       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1204 23:56:51.382826       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1204 23:56:51.382850       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1204 23:56:51.864208       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1204 23:56:51.914540       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1204 23:56:52.018382       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1204 23:56:52.019763       1 controller.go:606] quota admission added evaluator for: endpoints
	I1204 23:56:52.024982       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 23:56:53.073092       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1204 23:56:53.368536       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1204 23:56:53.430567       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1204 23:57:01.869049       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 23:57:10.226010       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1204 23:57:10.231944       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1204 23:57:23.496853       1 client.go:360] parsed scheme: "passthrough"
	I1204 23:57:23.496914       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1204 23:57:23.496922       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1204 23:58:04.681904       1 client.go:360] parsed scheme: "passthrough"
	I1204 23:58:04.681948       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1204 23:58:04.681957       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1204 23:58:35.536497       1 client.go:360] parsed scheme: "passthrough"
	I1204 23:58:35.536561       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1204 23:58:35.536570       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] <==
	I1205 00:01:10.616841       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1205 00:01:10.616852       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1205 00:01:43.793553       1 client.go:360] parsed scheme: "passthrough"
	I1205 00:01:43.793636       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1205 00:01:43.793646       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1205 00:02:25.154125       1 client.go:360] parsed scheme: "passthrough"
	I1205 00:02:25.154171       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1205 00:02:25.154179       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1205 00:02:25.392155       1 handler_proxy.go:102] no RequestInfo found in the context
	E1205 00:02:25.392230       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 00:02:25.392245       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 00:03:09.221663       1 client.go:360] parsed scheme: "passthrough"
	I1205 00:03:09.221707       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1205 00:03:09.221715       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1205 00:03:50.421710       1 client.go:360] parsed scheme: "passthrough"
	I1205 00:03:50.421761       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1205 00:03:50.421770       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1205 00:04:22.869903       1 handler_proxy.go:102] no RequestInfo found in the context
	E1205 00:04:22.870001       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 00:04:22.870014       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 00:04:32.361912       1 client.go:360] parsed scheme: "passthrough"
	I1205 00:04:32.361957       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1205 00:04:32.361965       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
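	
	Note: the repeated 503s above mean the aggregated v1beta1.metrics.k8s.io API never became available, which matches the metrics-server pod stuck in ImagePullBackOff later in this log. A quick way to surface that condition is sketched below in Go; it shells out to kubectl with this run's context name and is a hypothetical helper, not part of the test suite.
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Print the Available condition of the APIService that backs
		// metrics.k8s.io; a False status here explains the 503s above.
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-066167",
			"get", "apiservices", "v1beta1.metrics.k8s.io",
			`-o=jsonpath={.status.conditions[?(@.type=="Available")].status} {.status.conditions[?(@.type=="Available")].message}`,
		).CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("lookup failed:", err)
		}
	}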
	
	
	==> kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] <==
	W1205 00:00:46.528820       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 00:01:12.575220       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1205 00:01:18.179416       1 request.go:655] Throttling request took 1.048410278s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W1205 00:01:19.031149       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 00:01:43.077409       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1205 00:01:50.681678       1 request.go:655] Throttling request took 1.04855418s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W1205 00:01:51.533140       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 00:02:13.579363       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1205 00:02:23.183664       1 request.go:655] Throttling request took 1.047807439s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1205 00:02:24.035209       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 00:02:44.081609       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1205 00:02:55.685732       1 request.go:655] Throttling request took 1.0477809s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1205 00:02:56.537503       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 00:03:14.583446       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1205 00:03:28.187792       1 request.go:655] Throttling request took 1.047765071s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W1205 00:03:29.039255       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 00:03:45.086459       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1205 00:04:00.689645       1 request.go:655] Throttling request took 1.048234915s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W1205 00:04:01.542110       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 00:04:15.588387       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1205 00:04:33.192968       1 request.go:655] Throttling request took 1.048389978s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W1205 00:04:34.078846       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 00:04:46.090421       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1205 00:05:05.729246       1 request.go:655] Throttling request took 1.04854089s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W1205 00:05:06.580789       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
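	
	Note: the "Throttling request took ~1.05s" lines come from client-go's client-side rate limiter, not from the apiserver pushing back: the controller manager's discovery sweep queues behind its own QPS/burst limit. A minimal sketch of where that knob lives when building a client; the QPS and burst values below are illustrative, not this binary's actual flags.
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// client-go throttles before requests leave the process; a burst of
		// discovery GETs beyond these limits waits in line, which is what
		// "Throttling request took ..." records.
		cfg.QPS = 5    // illustrative
		cfg.Burst = 10 // illustrative
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client ready:", clientset != nil)
	}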
	
	
	==> kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] <==
	I1204 23:57:10.228139       1 shared_informer.go:247] Caches are synced for TTL 
	I1204 23:57:10.244736       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-066167" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1204 23:57:10.293607       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I1204 23:57:10.316660       1 shared_informer.go:247] Caches are synced for resource quota 
	I1204 23:57:10.317067       1 shared_informer.go:247] Caches are synced for resource quota 
	I1204 23:57:10.321571       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1204 23:57:10.325353       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k6vqq"
	I1204 23:57:10.325552       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xh97b"
	I1204 23:57:10.325661       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vb8kf"
	I1204 23:57:10.350178       1 shared_informer.go:247] Caches are synced for expand 
	I1204 23:57:10.351214       1 shared_informer.go:247] Caches are synced for PV protection 
	I1204 23:57:10.404370       1 shared_informer.go:247] Caches are synced for attach detach 
	I1204 23:57:10.443160       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vgvw4"
	I1204 23:57:10.452896       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	E1204 23:57:10.572380       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"a57a36c8-4b4a-4b45-8bb9-ec5b0cc99311", ResourceVersion:"398", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63868953414, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241023-a345ebe4\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001e2e660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001e2e680)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001e2e6a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001e2e6c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001e2e6e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001e2e700), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001e2e720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001e2e740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241023-a345ebe4", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001e2e760)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001e2e7a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001e20720), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001de92c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000175880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000167218)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001de9310)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1204 23:57:10.582567       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1204 23:57:10.849796       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1204 23:57:10.849825       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1204 23:57:10.885014       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1204 23:57:11.800291       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1204 23:57:11.843210       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-vgvw4"
	I1204 23:57:15.206454       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1204 23:58:38.746036       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E1204 23:58:39.050155       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E1204 23:58:39.129164       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] <==
	I1204 23:59:24.720336       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1204 23:59:24.720406       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1204 23:59:24.887906       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1204 23:59:24.888168       1 server_others.go:185] Using iptables Proxier.
	I1204 23:59:24.914318       1 server.go:650] Version: v1.20.0
	I1204 23:59:24.935301       1 config.go:315] Starting service config controller
	I1204 23:59:24.935318       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1204 23:59:24.935339       1 config.go:224] Starting endpoint slice config controller
	I1204 23:59:24.935343       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1204 23:59:25.051395       1 shared_informer.go:247] Caches are synced for service config 
	I1204 23:59:25.173242       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] <==
	I1204 23:57:11.507069       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1204 23:57:11.507165       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1204 23:57:11.538723       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1204 23:57:11.538818       1 server_others.go:185] Using iptables Proxier.
	I1204 23:57:11.539152       1 server.go:650] Version: v1.20.0
	I1204 23:57:11.539974       1 config.go:315] Starting service config controller
	I1204 23:57:11.539988       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1204 23:57:11.540006       1 config.go:224] Starting endpoint slice config controller
	I1204 23:57:11.540010       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1204 23:57:11.640113       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1204 23:57:11.640187       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] <==
	W1204 23:56:50.567444       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 23:56:50.632475       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:56:50.632669       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:56:50.641571       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1204 23:56:50.652126       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1204 23:56:50.660633       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:56:50.666496       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 23:56:50.666852       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 23:56:50.667052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 23:56:50.667317       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 23:56:50.667557       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 23:56:50.667807       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:50.667887       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:56:50.671742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 23:56:50.671919       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 23:56:50.672048       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:56:50.672178       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:56:51.547484       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 23:56:51.604108       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:56:51.604460       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:56:51.618354       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:56:51.652239       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 23:56:51.657470       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 23:56:51.704258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1204 23:56:54.832918       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] <==
	I1204 23:59:14.260397       1 serving.go:331] Generated self-signed cert in-memory
	W1204 23:59:21.821955       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 23:59:21.821981       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 23:59:21.821990       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 23:59:21.821995       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 23:59:22.005867       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1204 23:59:22.014441       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1204 23:59:22.014493       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:59:22.026179       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:59:22.227280       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
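	
	Note: the configmaps warning at startup resolved itself once caches synced at 23:59:22, so nothing was needed here; the hint embedded in the log applies when it persists. A sketch of that remediation follows; the rolebinding name and service-account subject are placeholders, not values from this run.
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Grant read access to the extension-apiserver-authentication
		// configmap, as the scheduler's own warning suggests.
		out, err := exec.Command("kubectl", "create", "rolebinding",
			"scheduler-auth-reader", // placeholder ROLEBINDING_NAME
			"-n", "kube-system",
			"--role=extension-apiserver-authentication-reader",
			"--serviceaccount=kube-system:kube-scheduler", // placeholder YOUR_NS:YOUR_SA
		).CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("create failed:", err)
		}
	}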
	
	
	==> kubelet <==
	Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: I1205 00:03:49.168556     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
	Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: I1205 00:04:01.168532     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
	Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: I1205 00:04:14.168490     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
	Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: I1205 00:04:25.168400     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
	Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: I1205 00:04:36.172319     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
	Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: I1205 00:04:51.168499     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
	Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.180383     664 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.180456     664 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181002     664 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-7rsv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Dec 05 00:05:06 old-k8s-version-066167 kubelet[664]: I1205 00:05:06.168524     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
	Dec 05 00:05:06 old-k8s-version-066167 kubelet[664]: E1205 00:05:06.169386     664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
	Dec 05 00:05:07 old-k8s-version-066167 kubelet[664]: E1205 00:05:07.169063     664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
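	
	Note: the pull loop above is expected for this suite: metrics-server is pointed at the unreachable fake.domain registry, so every attempt fails at DNS before any image data moves. A minimal sketch reproducing just that first step:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// Resolving the registry host is the first step of an image pull;
		// for fake.domain it fails with "no such host", matching the
		// kubelet errors above.
		addrs, err := net.LookupHost("fake.domain")
		if err != nil {
			fmt.Println("expected DNS failure:", err)
			return
		}
		fmt.Println("unexpectedly resolved:", addrs)
	}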
	
	
	==> kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] <==
	2024/12/04 23:59:49 Starting overwatch
	2024/12/04 23:59:49 Using namespace: kubernetes-dashboard
	2024/12/04 23:59:49 Using in-cluster config to connect to apiserver
	2024/12/04 23:59:49 Using secret token for csrf signing
	2024/12/04 23:59:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/04 23:59:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/04 23:59:49 Successful initial request to the apiserver, version: v1.20.0
	2024/12/04 23:59:49 Generating JWE encryption key
	2024/12/04 23:59:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/04 23:59:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/04 23:59:49 Initializing JWE encryption key from synchronized object
	2024/12/04 23:59:49 Creating in-cluster Sidecar client
	2024/12/04 23:59:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/04 23:59:49 Serving insecurely on HTTP port: 9090
	2024/12/05 00:00:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/05 00:00:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/05 00:01:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/05 00:01:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/05 00:02:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/05 00:02:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/05 00:03:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/05 00:03:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/05 00:04:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/05 00:04:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] <==
	I1205 00:00:10.468392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 00:00:10.526019       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 00:00:10.526220       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 00:00:28.091165       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 00:00:28.093949       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-066167_ff66112d-c4a2-4229-897e-56fd3d5df8b6!
	I1205 00:00:28.100964       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"33f61396-308b-4034-a659-b486fa025384", APIVersion:"v1", ResourceVersion:"834", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-066167_ff66112d-c4a2-4229-897e-56fd3d5df8b6 became leader
	I1205 00:00:28.194971       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-066167_ff66112d-c4a2-4229-897e-56fd3d5df8b6!
	
	
	==> storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] <==
	I1204 23:59:24.782347       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1204 23:59:54.784716       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
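	
	Note: this earlier provisioner instance died because it could not reach the in-cluster apiserver VIP within 30s while the control plane was still coming back; its replacement (61ffbe52...) acquired the lease in the previous section. A rough sketch of the same probe, which only succeeds from inside the cluster; the IP and path mirror the log line above.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			// The log shows the call giving up after roughly 30s.
			Timeout: 30 * time.Second,
			Transport: &http.Transport{
				// Skip cert verification for this illustration only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			fmt.Println("version check failed:", err) // e.g. "i/o timeout"
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}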
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-066167 -n old-k8s-version-066167
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-066167 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-ksvdj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-066167 describe pod metrics-server-9975d5f86-ksvdj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-066167 describe pod metrics-server-9975d5f86-ksvdj: exit status 1 (101.40223ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-ksvdj" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-066167 describe pod metrics-server-9975d5f86-ksvdj: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.70s)
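For reference, the post-mortem above (list non-running pods, then describe each) can be scripted; the Go sketch below shadows the helpers commands but is a hypothetical standalone version, not the actual helpers_test.go code. Like the helper invocation, it omits a namespace on describe, so a pod outside the default namespace, or one deleted between the two calls, comes back NotFound, exactly as in the stderr above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "old-k8s-version-066167" // profile/context from this run

		// List pods in any phase other than Running, across all namespaces.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}

		// Describe each one; failures are reported but not fatal.
		for _, pod := range strings.Fields(string(out)) {
			desc, derr := exec.Command("kubectl", "--context", ctx,
				"describe", "pod", pod).CombinedOutput()
			fmt.Printf("--- %s ---\n%s", pod, desc)
			if derr != nil {
				fmt.Println("describe failed:", derr)
			}
		}
	}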
Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.27
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 6.34
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.21
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 219.5
29 TestAddons/serial/Volcano 39.95
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.87
35 TestAddons/parallel/Registry 16.38
36 TestAddons/parallel/Ingress 19.86
37 TestAddons/parallel/InspektorGadget 10.99
38 TestAddons/parallel/MetricsServer 6.91
40 TestAddons/parallel/CSI 65.14
41 TestAddons/parallel/Headlamp 16.81
42 TestAddons/parallel/CloudSpanner 6.65
43 TestAddons/parallel/LocalPath 53.76
44 TestAddons/parallel/NvidiaDevicePlugin 5.53
45 TestAddons/parallel/Yakd 11.91
47 TestAddons/StoppedEnableDisable 12.2
48 TestCertOptions 34.69
49 TestCertExpiration 232.1
51 TestForceSystemdFlag 37.05
52 TestForceSystemdEnv 57.19
53 TestDockerEnvContainerd 46.02
58 TestErrorSpam/setup 31.68
59 TestErrorSpam/start 0.7
60 TestErrorSpam/status 1.04
61 TestErrorSpam/pause 1.81
62 TestErrorSpam/unpause 1.81
63 TestErrorSpam/stop 1.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 61.66
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.8
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.14
75 TestFunctional/serial/CacheCmd/cache/add_local 1.06
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.08
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.02
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 42.38
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.79
86 TestFunctional/serial/LogsFileCmd 1.69
87 TestFunctional/serial/InvalidService 4.03
89 TestFunctional/parallel/ConfigCmd 0.53
90 TestFunctional/parallel/DashboardCmd 8.92
91 TestFunctional/parallel/DryRun 0.52
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.28
97 TestFunctional/parallel/ServiceCmdConnect 8.82
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 26.68
101 TestFunctional/parallel/SSHCmd 0.66
102 TestFunctional/parallel/CpCmd 2.1
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.61
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.86
113 TestFunctional/parallel/License 0.32
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
116 TestFunctional/parallel/Version/short 0.09
117 TestFunctional/parallel/Version/components 1.34
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
125 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
126 TestFunctional/parallel/ImageCommands/Setup 0.7
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.14
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/MountCmd/any-port 7.35
144 TestFunctional/parallel/MountCmd/specific-port 2.17
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.82
146 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
148 TestFunctional/parallel/ProfileCmd/profile_list 0.49
149 TestFunctional/parallel/ServiceCmd/List 0.61
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
153 TestFunctional/parallel/ServiceCmd/Format 0.47
154 TestFunctional/parallel/ServiceCmd/URL 0.5
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 114.09
162 TestMultiControlPlane/serial/DeployApp 45.81
163 TestMultiControlPlane/serial/PingHostFromPods 1.66
164 TestMultiControlPlane/serial/AddWorkerNode 21.37
165 TestMultiControlPlane/serial/NodeLabels 0.1
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1
167 TestMultiControlPlane/serial/CopyFile 18.56
168 TestMultiControlPlane/serial/StopSecondaryNode 12.8
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
170 TestMultiControlPlane/serial/RestartSecondaryNode 17.95
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.99
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 136.58
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.22
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
175 TestMultiControlPlane/serial/StopCluster 36.02
176 TestMultiControlPlane/serial/RestartCluster 78.81
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
178 TestMultiControlPlane/serial/AddSecondaryNode 43.59
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
183 TestJSONOutput/start/Command 60.2
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.74
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.75
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
208 TestKicCustomNetwork/create_custom_network 39.11
209 TestKicCustomNetwork/use_default_bridge_network 32.42
210 TestKicExistingNetwork 34.14
211 TestKicCustomSubnet 33.46
212 TestKicStaticIP 33.78
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 64.62
217 TestMountStart/serial/StartWithMountFirst 6.11
218 TestMountStart/serial/VerifyMountFirst 0.25
219 TestMountStart/serial/StartWithMountSecond 8.68
220 TestMountStart/serial/VerifyMountSecond 0.27
221 TestMountStart/serial/DeleteFirst 1.62
222 TestMountStart/serial/VerifyMountPostDelete 0.28
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.18
225 TestMountStart/serial/VerifyMountPostStop 0.24
228 TestMultiNode/serial/FreshStart2Nodes 75.36
229 TestMultiNode/serial/DeployApp2Nodes 19.07
230 TestMultiNode/serial/PingHostFrom2Pods 1.09
231 TestMultiNode/serial/AddNode 19.94
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.69
234 TestMultiNode/serial/CopyFile 9.92
235 TestMultiNode/serial/StopNode 2.27
236 TestMultiNode/serial/StartAfterStop 9.7
237 TestMultiNode/serial/RestartKeepsNodes 90.96
238 TestMultiNode/serial/DeleteNode 5.58
239 TestMultiNode/serial/StopMultiNode 23.91
240 TestMultiNode/serial/RestartMultiNode 48.98
241 TestMultiNode/serial/ValidateNameConflict 33.61
246 TestPreload 126.19
248 TestScheduledStopUnix 104.99
251 TestInsufficientStorage 9.95
252 TestRunningBinaryUpgrade 85.11
254 TestKubernetesUpgrade 350.97
255 TestMissingContainerUpgrade 173.99
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 40.2
259 TestNoKubernetes/serial/StartWithStopK8s 7.8
260 TestNoKubernetes/serial/Start 11.96
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
262 TestNoKubernetes/serial/ProfileList 1.22
263 TestNoKubernetes/serial/Stop 1.2
264 TestNoKubernetes/serial/StartNoArgs 6.84
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
266 TestStoppedBinaryUpgrade/Setup 1
267 TestStoppedBinaryUpgrade/Upgrade 107.17
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
277 TestPause/serial/Start 71.15
278 TestPause/serial/SecondStartNoReconfiguration 6.88
279 TestPause/serial/Pause 1.17
280 TestPause/serial/VerifyStatus 0.5
281 TestPause/serial/Unpause 0.95
282 TestPause/serial/PauseAgain 1.09
283 TestPause/serial/DeletePaused 2.94
284 TestPause/serial/VerifyDeletedResources 0.86
292 TestNetworkPlugins/group/false 6.17
297 TestStartStop/group/old-k8s-version/serial/FirstStart 137.41
298 TestStartStop/group/old-k8s-version/serial/DeployApp 9.64
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.69
300 TestStartStop/group/old-k8s-version/serial/Stop 12.41
302 TestStartStop/group/no-preload/serial/FirstStart 74.6
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
305 TestStartStop/group/no-preload/serial/DeployApp 10.49
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.36
307 TestStartStop/group/no-preload/serial/Stop 12.08
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
309 TestStartStop/group/no-preload/serial/SecondStart 302.84
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
313 TestStartStop/group/old-k8s-version/serial/Pause 3.11
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/embed-certs/serial/FirstStart 58.08
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.21
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.58
319 TestStartStop/group/no-preload/serial/Pause 3.86
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.4
322 TestStartStop/group/embed-certs/serial/DeployApp 8.37
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
324 TestStartStop/group/embed-certs/serial/Stop 12.14
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.41
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.18
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
329 TestStartStop/group/embed-certs/serial/SecondStart 297.34
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 275.31
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.01
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
339 TestStartStop/group/newest-cni/serial/FirstStart 43.31
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
341 TestStartStop/group/embed-certs/serial/Pause 3.83
342 TestNetworkPlugins/group/auto/Start 54.89
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.2
345 TestStartStop/group/newest-cni/serial/Stop 1.33
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
347 TestStartStop/group/newest-cni/serial/SecondStart 15.08
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
351 TestStartStop/group/newest-cni/serial/Pause 3.19
352 TestNetworkPlugins/group/auto/KubeletFlags 0.35
353 TestNetworkPlugins/group/auto/NetCatPod 9.41
354 TestNetworkPlugins/group/kindnet/Start 65.95
355 TestNetworkPlugins/group/auto/DNS 0.22
356 TestNetworkPlugins/group/auto/Localhost 0.19
357 TestNetworkPlugins/group/auto/HairPin 0.21
358 TestNetworkPlugins/group/calico/Start 71.98
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
361 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
362 TestNetworkPlugins/group/kindnet/DNS 0.22
363 TestNetworkPlugins/group/kindnet/Localhost 0.22
364 TestNetworkPlugins/group/kindnet/HairPin 0.14
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/custom-flannel/Start 56
367 TestNetworkPlugins/group/calico/KubeletFlags 0.54
368 TestNetworkPlugins/group/calico/NetCatPod 11.53
369 TestNetworkPlugins/group/calico/DNS 0.32
370 TestNetworkPlugins/group/calico/Localhost 0.24
371 TestNetworkPlugins/group/calico/HairPin 0.25
372 TestNetworkPlugins/group/enable-default-cni/Start 41.94
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.38
375 TestNetworkPlugins/group/custom-flannel/DNS 0.29
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.41
380 TestNetworkPlugins/group/flannel/Start 50.4
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.4
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
384 TestNetworkPlugins/group/bridge/Start 45.16
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
387 TestNetworkPlugins/group/flannel/NetCatPod 10.46
388 TestNetworkPlugins/group/flannel/DNS 0.22
389 TestNetworkPlugins/group/flannel/Localhost 0.2
390 TestNetworkPlugins/group/flannel/HairPin 0.22
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
392 TestNetworkPlugins/group/bridge/NetCatPod 10.48
393 TestNetworkPlugins/group/bridge/DNS 0.44
394 TestNetworkPlugins/group/bridge/Localhost 0.23
395 TestNetworkPlugins/group/bridge/HairPin 0.24
TestDownloadOnly/v1.20.0/json-events (8.27s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-418633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-418633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.271594786s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.27s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1204 23:10:57.251422    7736 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1204 23:10:57.251507    7736 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-418633
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-418633: exit status 85 (67.897721ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-418633 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |          |
	|         | -p download-only-418633        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:10:49
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:10:49.032839    7741 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:10:49.033023    7741 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:49.033034    7741 out.go:358] Setting ErrFile to fd 2...
	I1204 23:10:49.033040    7741 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:49.033566    7741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	W1204 23:10:49.033718    7741 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20045-2283/.minikube/config/config.json: open /home/jenkins/minikube-integration/20045-2283/.minikube/config/config.json: no such file or directory
	I1204 23:10:49.034151    7741 out.go:352] Setting JSON to true
	I1204 23:10:49.034901    7741 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3199,"bootTime":1733350650,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1204 23:10:49.034971    7741 start.go:139] virtualization:  
	I1204 23:10:49.037122    7741 out.go:97] [download-only-418633] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1204 23:10:49.037281    7741 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball: no such file or directory
	I1204 23:10:49.037381    7741 notify.go:220] Checking for updates...
	I1204 23:10:49.039667    7741 out.go:169] MINIKUBE_LOCATION=20045
	I1204 23:10:49.042282    7741 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:10:49.043549    7741 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1204 23:10:49.044799    7741 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	I1204 23:10:49.046032    7741 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1204 23:10:49.048803    7741 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 23:10:49.049131    7741 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:10:49.069262    7741 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:10:49.069372    7741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:49.414263    7741 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-04 23:10:49.405554262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:10:49.414374    7741 docker.go:318] overlay module found
	I1204 23:10:49.420330    7741 out.go:97] Using the docker driver based on user configuration
	I1204 23:10:49.420355    7741 start.go:297] selected driver: docker
	I1204 23:10:49.420362    7741 start.go:901] validating driver "docker" against <nil>
	I1204 23:10:49.420466    7741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:49.472037    7741 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-04 23:10:49.463845917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:10:49.472230    7741 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:10:49.472522    7741 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1204 23:10:49.472703    7741 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 23:10:49.474526    7741 out.go:169] Using Docker driver with root privileges
	I1204 23:10:49.476086    7741 cni.go:84] Creating CNI manager for ""
	I1204 23:10:49.476144    7741 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1204 23:10:49.476161    7741 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:10:49.476250    7741 start.go:340] cluster config:
	{Name:download-only-418633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-418633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:10:49.477960    7741 out.go:97] Starting "download-only-418633" primary control-plane node in "download-only-418633" cluster
	I1204 23:10:49.477980    7741 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1204 23:10:49.479301    7741 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1204 23:10:49.479325    7741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1204 23:10:49.479470    7741 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1204 23:10:49.495984    7741 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:49.496145    7741 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1204 23:10:49.496238    7741 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:49.572454    7741 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1204 23:10:49.572481    7741 cache.go:56] Caching tarball of preloaded images
	I1204 23:10:49.572668    7741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1204 23:10:49.574647    7741 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1204 23:10:49.574667    7741 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1204 23:10:49.665318    7741 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1204 23:10:53.863078    7741 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	
	
	* The control-plane node download-only-418633 host does not exist
	  To start a cluster, run: "minikube start -p download-only-418633"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-418633
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.2/json-events (6.34s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-146629 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-146629 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.342327097s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (6.34s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1204 23:11:04.000809    7736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1204 23:11:04.000846    7736 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-146629
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-146629: exit status 85 (76.753005ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-418633 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | -p download-only-418633        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-418633        | download-only-418633 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | -o=json --download-only        | download-only-146629 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | -p download-only-146629        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:10:57
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:10:57.702763    7941 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:10:57.702894    7941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:57.702905    7941 out.go:358] Setting ErrFile to fd 2...
	I1204 23:10:57.702910    7941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:57.703160    7941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1204 23:10:57.703558    7941 out.go:352] Setting JSON to true
	I1204 23:10:57.704289    7941 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3208,"bootTime":1733350650,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1204 23:10:57.704361    7941 start.go:139] virtualization:  
	I1204 23:10:57.706141    7941 out.go:97] [download-only-146629] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1204 23:10:57.706305    7941 notify.go:220] Checking for updates...
	I1204 23:10:57.707398    7941 out.go:169] MINIKUBE_LOCATION=20045
	I1204 23:10:57.708783    7941 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:10:57.710220    7941 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1204 23:10:57.711514    7941 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	I1204 23:10:57.712703    7941 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1204 23:10:57.715239    7941 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 23:10:57.715476    7941 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:10:57.736325    7941 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:10:57.736459    7941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:57.803948    7941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:57.79328915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:10:57.804066    7941 docker.go:318] overlay module found
	I1204 23:10:57.805500    7941 out.go:97] Using the docker driver based on user configuration
	I1204 23:10:57.805530    7941 start.go:297] selected driver: docker
	I1204 23:10:57.805538    7941 start.go:901] validating driver "docker" against <nil>
	I1204 23:10:57.805648    7941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:57.855727    7941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:57.846430343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:10:57.855925    7941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:10:57.856192    7941 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1204 23:10:57.856338    7941 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 23:10:57.857838    7941 out.go:169] Using Docker driver with root privileges
	I1204 23:10:57.859003    7941 cni.go:84] Creating CNI manager for ""
	I1204 23:10:57.859067    7941 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1204 23:10:57.859078    7941 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:10:57.859155    7941 start.go:340] cluster config:
	{Name:download-only-146629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-146629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:10:57.860556    7941 out.go:97] Starting "download-only-146629" primary control-plane node in "download-only-146629" cluster
	I1204 23:10:57.860574    7941 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1204 23:10:57.861742    7941 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1204 23:10:57.861765    7941 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1204 23:10:57.861875    7941 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1204 23:10:57.878628    7941 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:57.878762    7941 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1204 23:10:57.878784    7941 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1204 23:10:57.878793    7941 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1204 23:10:57.878800    7941 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1204 23:10:57.923330    7941 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1204 23:10:57.923355    7941 cache.go:56] Caching tarball of preloaded images
	I1204 23:10:57.923520    7941 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1204 23:10:57.925027    7941 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1204 23:10:57.925056    7941 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 ...
	I1204 23:10:58.001045    7941 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:5a1c96cd03f848c5b0e8fb66f315acd5 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1204 23:11:02.417377    7941 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 ...
	I1204 23:11:02.417486    7941 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-146629 host does not exist
	  To start a cluster, run: "minikube start -p download-only-146629"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-146629
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.53s)

=== RUN   TestBinaryMirror
I1204 23:11:05.210133    7736 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-778139 --alsologtostderr --binary-mirror http://127.0.0.1:34431 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-778139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-778139
--- PASS: TestBinaryMirror (0.53s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-458020
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-458020: exit status 85 (89.388271ms)

-- stdout --
	* Profile "addons-458020" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-458020"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-458020
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-458020: exit status 85 (69.234867ms)

-- stdout --
	* Profile "addons-458020" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-458020"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (219.5s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-458020 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-458020 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m39.493790062s)
--- PASS: TestAddons/Setup (219.50s)

TestAddons/serial/Volcano (39.95s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 58.913653ms
addons_test.go:815: volcano-admission stabilized in 59.051349ms
addons_test.go:807: volcano-scheduler stabilized in 59.123356ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-w82jt" [9da84da0-a0fb-495f-acf3-770fb9bc8f85] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003707566s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-rspx9" [785ebac1-d41a-49a1-802b-8bf47af39625] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00356335s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-hz8w4" [17725bb4-4a8c-4b9f-b43d-5cf291568a84] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003707858s
addons_test.go:842: (dbg) Run:  kubectl --context addons-458020 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-458020 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-458020 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [89075a4a-7dbb-4fc8-9194-c993c6bc8779] Pending
helpers_test.go:344: "test-job-nginx-0" [89075a4a-7dbb-4fc8-9194-c993c6bc8779] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [89075a4a-7dbb-4fc8-9194-c993c6bc8779] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003719276s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-458020 addons disable volcano --alsologtostderr -v=1: (11.225007391s)
--- PASS: TestAddons/serial/Volcano (39.95s)
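For readers replaying this by hand: the steps above reduce to submitting the test's sample Volcano job and waiting for its pod. A minimal sketch against this run's profile; kubectl wait stands in for the test's polling loop, and testdata/vcjob.yaml is the repository fixture:

    # Submit the sample Volcano job and wait for its pod to become ready.
    kubectl --context addons-458020 create -f testdata/vcjob.yaml
    kubectl --context addons-458020 get vcjob -n my-volcano
    kubectl --context addons-458020 wait pod -n my-volcano \
        -l volcano.sh/job-name=test-job --for=condition=ready --timeout=180s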

TestAddons/serial/GCPAuth/Namespaces (0.18s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-458020 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-458020 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (8.87s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-458020 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-458020 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8dfb9a80-b829-4895-a561-4199fafc0285] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8dfb9a80-b829-4895-a561-4199fafc0285] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00359025s
addons_test.go:633: (dbg) Run:  kubectl --context addons-458020 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-458020 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-458020 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-458020 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.87s)
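What the exec steps above verify: the gcp-auth webhook mutates every new pod, mounting fake credentials and setting the matching env vars. A hand-run equivalent using the same busybox fixture (the mount path and variable names are the ones the addon injects; kubectl wait replaces the test's polling):

    # A freshly created pod should come up with credentials injected.
    kubectl --context addons-458020 create -f testdata/busybox.yaml
    kubectl --context addons-458020 wait pod busybox --for=condition=ready --timeout=120s
    kubectl --context addons-458020 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-458020 exec busybox -- cat /google-app-creds.json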

TestAddons/parallel/Registry (16.38s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.947959ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-kpndt" [37b2d990-8964-4c5b-b53c-1c1c87d5c783] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006072313s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bfdd8" [1f29e2a1-f40e-411e-ae58-0d36611c5653] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004665103s
addons_test.go:331: (dbg) Run:  kubectl --context addons-458020 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-458020 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-458020 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.411987477s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 ip
2024/12/04 23:15:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.38s)
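The two probes above can be replayed manually: one from inside the cluster against the registry Service, one from the host against the node port (192.168.49.2 is this run's node IP; curl -sI is an illustrative stand-in for the DEBUG GET in the log):

    # In-cluster: the registry service must answer HTTP.
    kubectl --context addons-458020 run --rm registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # From the host: registry-proxy exposes port 5000 on the node IP.
    curl -sI "http://$(out/minikube-linux-arm64 -p addons-458020 ip):5000"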

TestAddons/parallel/Ingress (19.86s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-458020 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-458020 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-458020 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7802af81-7dec-4386-9098-a857699be027] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7802af81-7dec-4386-9098-a857699be027] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003788058s
I1204 23:17:26.878722    7736 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-458020 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-458020 addons disable ingress-dns --alsologtostderr -v=1: (1.42848777s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-458020 addons disable ingress --alsologtostderr -v=1: (7.851147107s)
--- PASS: TestAddons/parallel/Ingress (19.86s)
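Both ingress paths exercised above are checkable by hand: ingress-nginx routes on the Host header, and ingress-dns makes the node answer DNS queries for fixture hostnames (nginx.example.com and hello-john.test come from the testdata manifests):

    # ingress-nginx: nginx must answer when the Host header matches the rule.
    out/minikube-linux-arm64 -p addons-458020 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: the node IP doubles as a DNS server for the test zone.
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-458020 ip)"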

TestAddons/parallel/InspektorGadget (10.99s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-265rm" [250fe07b-9327-4697-962d-20232b8a7805] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005334983s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-458020 addons disable inspektor-gadget --alsologtostderr -v=1: (5.983133242s)
--- PASS: TestAddons/parallel/InspektorGadget (10.99s)

TestAddons/parallel/MetricsServer (6.91s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.94599ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-vwqrj" [7c9880d0-e738-4d14-8ba9-6dad3e03bb8e] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004496719s
addons_test.go:402: (dbg) Run:  kubectl --context addons-458020 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.91s)
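The pass condition above is simply that the Metrics API serves data once the addon's pod is Running; a quick manual equivalent (kubectl top errors out until metrics-server has completed at least one scrape):

    kubectl --context addons-458020 top pods -n kube-system
    kubectl --context addons-458020 top nodes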

TestAddons/parallel/CSI (65.14s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1204 23:15:55.303558    7736 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1204 23:15:55.311008    7736 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1204 23:15:55.311044    7736 kapi.go:107] duration metric: took 10.276519ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.288326ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5a4b78b1-d163-489b-9111-ba5d1b59d7f1] Pending
helpers_test.go:344: "task-pv-pod" [5a4b78b1-d163-489b-9111-ba5d1b59d7f1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5a4b78b1-d163-489b-9111-ba5d1b59d7f1] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003308765s
addons_test.go:511: (dbg) Run:  kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-458020 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-458020 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-458020 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-458020 delete pod task-pv-pod: (1.019875797s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-458020 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [26af3633-7f77-4783-8003-68c0ea366de2] Pending
helpers_test.go:344: "task-pv-pod-restore" [26af3633-7f77-4783-8003-68c0ea366de2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [26af3633-7f77-4783-8003-68c0ea366de2] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003655304s
addons_test.go:553: (dbg) Run:  kubectl --context addons-458020 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-458020 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-458020 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-458020 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.05277168s)
--- PASS: TestAddons/parallel/CSI (65.14s)
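The long sequence above is a full create/snapshot/restore round trip over the csi-hostpath-driver. Condensed, using the same repository fixtures (the test polls each object to Bound or Ready between steps, which is elided here):

    kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # readyToUse flips to true once the snapshot is cut.
    kubectl --context addons-458020 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-458020 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml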

TestAddons/parallel/Headlamp (16.81s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-458020 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-458020 --alsologtostderr -v=1: (1.031417992s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-vstsm" [a774ef18-d5a4-4267-8431-f54726bf1278] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-vstsm" [a774ef18-d5a4-4267-8431-f54726bf1278] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-vstsm" [a774ef18-d5a4-4267-8431-f54726bf1278] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004097784s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-458020 addons disable headlamp --alsologtostderr -v=1: (5.773025533s)
--- PASS: TestAddons/parallel/Headlamp (16.81s)

TestAddons/parallel/CloudSpanner (6.65s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-4kmfm" [a87aa60f-6528-4582-a955-db86d5ec9f87] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004792158s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

TestAddons/parallel/LocalPath (53.76s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-458020 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-458020 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-458020 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b7c21265-5d31-4cc6-a26a-3df8166c1c2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b7c21265-5d31-4cc6-a26a-3df8166c1c2b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b7c21265-5d31-4cc6-a26a-3df8166c1c2b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003656266s
addons_test.go:906: (dbg) Run:  kubectl --context addons-458020 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 ssh "cat /opt/local-path-provisioner/pvc-5d87f55c-04de-40ed-b323-45d0b30b4870_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-458020 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-458020 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-458020 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.415564951s)
--- PASS: TestAddons/parallel/LocalPath (53.76s)
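The ssh step above is the actual assertion: data written through the PVC must land under /opt/local-path-provisioner on the node. The pvc-... directory name embeds the PVC UID, so it differs every run; a glob (an addition here) sidesteps that when checking by hand:

    kubectl --context addons-458020 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-458020 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # After the pod completes, read the file back from the node's hostPath.
    out/minikube-linux-arm64 -p addons-458020 ssh "cat /opt/local-path-provisioner/*/file1"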

TestAddons/parallel/NvidiaDevicePlugin (5.53s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dcgxv" [4dc6e454-448c-44b0-8146-2136fd00015e] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005029127s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

TestAddons/parallel/Yakd (11.91s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-m44d9" [3956eea0-158e-446f-b8ee-31c66dbfc9dd] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005459695s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-458020 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-458020 addons disable yakd --alsologtostderr -v=1: (5.898313176s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

TestAddons/StoppedEnableDisable (12.2s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-458020
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-458020: (11.924217244s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-458020
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-458020
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-458020
--- PASS: TestAddons/StoppedEnableDisable (12.20s)
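The point of this test: addons enable/disable must keep working against a stopped cluster, since it edits the profile's stored config rather than the live API server. By hand, with the same profile:

    out/minikube-linux-arm64 stop -p addons-458020
    # Both must succeed while the node container is stopped.
    out/minikube-linux-arm64 addons enable dashboard -p addons-458020
    out/minikube-linux-arm64 addons disable dashboard -p addons-458020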

TestCertOptions (34.69s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-516338 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-516338 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.036926835s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-516338 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-516338 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-516338 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-516338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-516338
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-516338: (2.016644094s)
--- PASS: TestCertOptions (34.69s)
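What the openssl step above checks: every --apiserver-ips/--apiserver-names value must appear in the apiserver certificate's SANs, and the kubeconfig must use the custom --apiserver-port. A manual spot check (the grep filters are added here for readability):

    out/minikube-linux-arm64 -p cert-options-516338 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 "Subject Alternative Name"
    # The server URL should end in the custom port 8555.
    kubectl --context cert-options-516338 config view | grep server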

TestCertExpiration (232.1s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-688223 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-688223 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.385748521s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-688223 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-688223 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.44106304s)
helpers_test.go:175: Cleaning up "cert-expiration-688223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-688223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-688223: (2.274603834s)
--- PASS: TestCertExpiration (232.10s)
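The timing here is the test: the first start issues certificates valid for only 3m, the test sits out that window (which is why the total runtime is about 232s), and the second start must succeed by regenerating the expired certs rather than failing on them. A sketch; the sleep is illustrative:

    out/minikube-linux-arm64 start -p cert-expiration-688223 --memory=2048 \
        --cert-expiration=3m --driver=docker --container-runtime=containerd
    sleep 180   # let the 3m certificates lapse
    out/minikube-linux-arm64 start -p cert-expiration-688223 --memory=2048 \
        --cert-expiration=8760h --driver=docker --container-runtime=containerd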

TestForceSystemdFlag (37.05s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-773661 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-773661 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.487690042s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-773661 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-773661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-773661
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-773661: (2.206446574s)
--- PASS: TestForceSystemdFlag (37.05s)
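The ssh step above is the assertion: with --force-systemd, the generated containerd config must select the systemd cgroup driver. Grepping for the runc option makes that visible (the grep is an addition here, on the assumption that SystemdCgroup is the key the test inspects):

    out/minikube-linux-arm64 -p force-systemd-flag-773661 ssh "cat /etc/containerd/config.toml" \
        | grep SystemdCgroup
    # Expected: SystemdCgroup = true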

TestForceSystemdEnv (57.19s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-932373 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-932373 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (54.646828378s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-932373 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-932373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-932373
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-932373: (2.204756268s)
--- PASS: TestForceSystemdEnv (57.19s)

TestDockerEnvContainerd (46.02s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-958322 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-958322 --driver=docker  --container-runtime=containerd: (30.548703456s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-958322"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bFOuQed6THa3/agent.28866" SSH_AGENT_PID="28867" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bFOuQed6THa3/agent.28866" SSH_AGENT_PID="28867" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bFOuQed6THa3/agent.28866" SSH_AGENT_PID="28867" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.098137841s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bFOuQed6THa3/agent.28866" SSH_AGENT_PID="28867" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-958322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-958322
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-958322: (1.948626332s)
--- PASS: TestDockerEnvContainerd (46.02s)
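The SSH_AUTH_SOCK/DOCKER_HOST variables pasted into each command above are what docker-env emits in --ssh-host mode; interactively one would normally eval its output instead (the socket path and port change every run):

    # Point the local docker CLI at the daemon inside the minikube node, over SSH.
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-958322)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls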

TestErrorSpam/setup (31.68s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-049186 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-049186 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-049186 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-049186 --driver=docker  --container-runtime=containerd: (31.68247474s)
--- PASS: TestErrorSpam/setup (31.68s)

TestErrorSpam/start (0.7s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (1.04s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 status
--- PASS: TestErrorSpam/status (1.04s)

TestErrorSpam/pause (1.81s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (1.81s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.51s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 stop: (1.314720972s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-049186 --log_dir /tmp/nospam-049186 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/test/nested/copy/7736/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.66s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-876483 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1204 23:19:45.367088    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:45.373786    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:45.385311    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:45.406814    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:45.448189    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:45.529606    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:45.691258    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:46.012983    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:46.657066    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:47.938377    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:50.499670    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:19:55.621700    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:20:05.864112    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-876483 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m1.660162194s)
--- PASS: TestFunctional/serial/StartWithProxy (61.66s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.8s)
=== RUN   TestFunctional/serial/SoftStart
I1204 23:20:25.851732    7736 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-876483 --alsologtostderr -v=8
E1204 23:20:26.345913    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-876483 --alsologtostderr -v=8: (5.795048306s)
functional_test.go:663: soft start took 5.797471998s for "functional-876483" cluster.
I1204 23:20:31.647106    7736 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (5.80s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-876483 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 cache add registry.k8s.io/pause:3.1: (1.587274136s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 cache add registry.k8s.io/pause:3.3: (1.370653055s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 cache add registry.k8s.io/pause:latest: (1.186482328s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-876483 /tmp/TestFunctionalserialCacheCmdcacheadd_local4145520872/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 cache add minikube-local-cache-test:functional-876483
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 cache delete minikube-local-cache-test:functional-876483
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-876483
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.02s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-876483 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (305.097671ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 cache reload: (1.111640497s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.02s)
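The sequence above pins down the contract of minikube cache reload: after an image cached on the host is deleted inside the node, reload must push every cached image back. By hand, with the expected exit codes noted as comments:

    out/minikube-linux-arm64 -p functional-876483 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-876483 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-linux-arm64 -p functional-876483 cache reload
    out/minikube-linux-arm64 -p functional-876483 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored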

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 kubectl -- --context functional-876483 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-876483 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.38s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-876483 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1204 23:21:07.307284    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-876483 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.379523314s)
functional_test.go:761: restart took 42.379643017s for "functional-876483" cluster.
I1204 23:21:22.263732    7736 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (42.38s)
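--extra-config is the flag under test: component.key=value pairs are forwarded to the named component (the apiserver here), and restarting an existing profile with the flag must still come up healthy. Usage shape, as run above:

    out/minikube-linux-arm64 start -p functional-876483 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all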

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-876483 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.79s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 logs: (1.788713364s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

TestFunctional/serial/LogsFileCmd (1.69s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 logs --file /tmp/TestFunctionalserialLogsFileCmd1535186758/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 logs --file /tmp/TestFunctionalserialLogsFileCmd1535186758/001/logs.txt: (1.686547158s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

TestFunctional/serial/InvalidService (4.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-876483 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-876483
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-876483: exit status 115 (406.780836ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32534 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-876483 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)
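
The expected failure above shows `minikube service` refusing a Service with no running backing pod (SVC_UNREACHABLE, exit status 115 in this run; testdata/invalidsvc.yaml itself is not shown here). A sketch of detecting that outcome programmatically, assuming the same binary and profile:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-876483")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 115 in this run; treating the exact code as stable is an assumption.
		fmt.Println("service unreachable, exit status", ee.ExitCode())
		return
	}
	if err != nil {
		panic(err) // the binary could not be started at all
	}
	fmt.Println("unexpected: service resolved")
}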

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-876483 config get cpus: exit status 14 (60.960587ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-876483 config get cpus: exit status 14 (115.818888ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
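
In this run `config get cpus` exits 14 with "specified key could not be found in config" whenever the key is unset, and succeeds after `config set cpus 2`. A sketch that uses that exit code to distinguish "unset" from a real failure (whether 14 is stable across minikube versions is an assumption):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func cpusConfigured(profile string) (bool, error) {
	err := exec.Command("out/minikube-linux-arm64", "-p", profile, "config", "get", "cpus").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return false, nil // key not present in the profile's config
	}
	if err != nil {
		return false, err // some other failure
	}
	return true, nil
}

func main() {
	set, err := cpusConfigured("functional-876483")
	if err != nil {
		panic(err)
	}
	fmt.Println("cpus configured:", set)
}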

TestFunctional/parallel/DashboardCmd (8.92s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-876483 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-876483 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 45796: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.92s)

TestFunctional/parallel/DryRun (0.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-876483 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-876483 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (219.996023ms)

-- stdout --
	* [functional-876483] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1204 23:22:09.742733   44895 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:22:09.743822   44895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:22:09.743860   44895 out.go:358] Setting ErrFile to fd 2...
	I1204 23:22:09.743878   44895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:22:09.745735   44895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1204 23:22:09.746217   44895 out.go:352] Setting JSON to false
	I1204 23:22:09.747867   44895 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3880,"bootTime":1733350650,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1204 23:22:09.747967   44895 start.go:139] virtualization:  
	I1204 23:22:09.750220   44895 out.go:177] * [functional-876483] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1204 23:22:09.751797   44895 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:22:09.753037   44895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:22:09.753302   44895 notify.go:220] Checking for updates...
	I1204 23:22:09.757563   44895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1204 23:22:09.758996   44895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	I1204 23:22:09.760376   44895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1204 23:22:09.762616   44895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:22:09.766684   44895 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1204 23:22:09.767279   44895 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:22:09.800712   44895 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:22:09.800838   44895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:22:09.855901   44895 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-04 23:22:09.847145165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:22:09.856074   44895 docker.go:318] overlay module found
	I1204 23:22:09.857989   44895 out.go:177] * Using the docker driver based on existing profile
	I1204 23:22:09.860343   44895 start.go:297] selected driver: docker
	I1204 23:22:09.860598   44895 start.go:901] validating driver "docker" against &{Name:functional-876483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-876483 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:22:09.860752   44895 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:22:09.863323   44895 out.go:201] 
	W1204 23:22:09.864885   44895 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1204 23:22:09.866420   44895 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-876483 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.52s)
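
The first dry-run above is rejected (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is below the 1800MB usable minimum), while the second dry-run without --memory succeeds. A sketch asserting the rejection, assuming the same binary and profile; treating 23 as a stable code is an assumption:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-876483",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("dry-run rejected as expected, exit status", ee.ExitCode())
		return
	}
	fmt.Println("unexpected: 250MB dry-run was accepted, err =", err)
}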

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-876483 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-876483 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (255.766865ms)

-- stdout --
	* [functional-876483] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1204 23:22:11.537684   45450 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:22:11.545406   45450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:22:11.545420   45450 out.go:358] Setting ErrFile to fd 2...
	I1204 23:22:11.545427   45450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:22:11.546311   45450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1204 23:22:11.546743   45450 out.go:352] Setting JSON to false
	I1204 23:22:11.547671   45450 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3882,"bootTime":1733350650,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1204 23:22:11.547758   45450 start.go:139] virtualization:  
	I1204 23:22:11.549945   45450 out.go:177] * [functional-876483] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1204 23:22:11.552316   45450 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:22:11.552905   45450 notify.go:220] Checking for updates...
	I1204 23:22:11.555409   45450 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:22:11.557335   45450 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1204 23:22:11.559495   45450 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	I1204 23:22:11.562065   45450 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1204 23:22:11.564179   45450 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:22:11.567358   45450 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1204 23:22:11.568188   45450 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:22:11.597271   45450 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:22:11.597380   45450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:22:11.693580   45450 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-04 23:22:11.683968534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-n
f-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:22:11.693690   45450 docker.go:318] overlay module found
	I1204 23:22:11.695559   45450 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1204 23:22:11.696988   45450 start.go:297] selected driver: docker
	I1204 23:22:11.697005   45450 start.go:901] validating driver "docker" against &{Name:functional-876483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-876483 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:22:11.697192   45450 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:22:11.699602   45450 out.go:201] 
	W1204 23:22:11.700935   45450 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1204 23:22:11.702400   45450 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.28s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)
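
The three invocations above exercise the default view, a Go template over the status fields, and JSON output. A sketch of consuming the JSON form; the struct keys mirror the template fields shown in the log ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}), and it is an assumption that the JSON uses the same names:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Note: minikube status exits non-zero when components are down,
	// but still prints the payload on stdout.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-876483",
		"status", "-o", "json").Output()
	if err != nil && len(out) == 0 {
		panic(err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", st)
}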

TestFunctional/parallel/ServiceCmdConnect (8.82s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-876483 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-876483 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-7nllh" [70a264c1-953f-464b-9edc-4656e2e11bc4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-7nllh" [70a264c1-953f-464b-9edc-4656e2e11bc4] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.007018437s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30120
functional_test.go:1675: http://192.168.49.2:30120: success! body:

Hostname: hello-node-connect-65d86f57f4-7nllh

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30120
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.82s)
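
End to end, the test creates a deployment, exposes it as a NodePort Service, resolves the URL with `minikube service ... --url`, and GETs it; the body above is the echoserver's view of that request. The final probe boils down to the sketch below (the URL is the one from this run and will differ per cluster):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:30120")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%s\n%s", resp.Status, body)
}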

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (26.68s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8d4a7967-9cab-4880-868f-d73594cd4b28] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003590343s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-876483 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-876483 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-876483 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-876483 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb28253e-e928-42f8-a047-499713f88c7a] Pending
helpers_test.go:344: "sp-pod" [bb28253e-e928-42f8-a047-499713f88c7a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb28253e-e928-42f8-a047-499713f88c7a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004250311s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-876483 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-876483 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-876483 delete -f testdata/storage-provisioner/pod.yaml: (1.441944559s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-876483 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a4784431-b5b2-4f5a-852a-f890537fed55] Pending
helpers_test.go:344: "sp-pod" [a4784431-b5b2-4f5a-852a-f890537fed55] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003320188s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-876483 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.68s)
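
The sequence above proves persistence: write /tmp/mount/foo in sp-pod, delete the pod, recreate it against the same claim, and `ls /tmp/mount` still shows the file. One intermediate condition is that the claim binds; a sketch of that check, assuming kubectl on PATH (the contents of testdata/storage-provisioner/pvc.yaml are not shown here):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-876483",
		"get", "pvc", "myclaim", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pvc struct {
		Status struct{ Phase string }
	}
	if err := json.Unmarshal(out, &pvc); err != nil {
		panic(err)
	}
	fmt.Println("pvc myclaim phase:", pvc.Status.Phase) // "Bound" once provisioned
}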

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.1s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh -n functional-876483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 cp functional-876483:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1273510803/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh -n functional-876483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh -n functional-876483 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.10s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7736/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo cat /etc/test/nested/copy/7736/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.61s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7736.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo cat /etc/ssl/certs/7736.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7736.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo cat /usr/share/ca-certificates/7736.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/77362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo cat /etc/ssl/certs/77362.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/77362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo cat /usr/share/ca-certificates/77362.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.61s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-876483 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
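
The go-template in the command above ranges over the first node's label map and prints only the keys. A self-contained stdlib illustration of that exact template syntax, with made-up labels standing in for the live node object:

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/arch": "arm64",
		"kubernetes.io/os":   "linux",
	}
	// Same template body as the kubectl invocation; map keys come out sorted.
	tmpl := template.Must(template.New("labels").Parse(
		"'{{range $k, $v := .}}{{$k}} {{end}}'"))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
	// Prints: 'kubernetes.io/arch kubernetes.io/os '
}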

TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-876483 ssh "sudo systemctl is-active docker": exit status 1 (444.626092ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-876483 ssh "sudo systemctl is-active crio": exit status 1 (412.787632ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)
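
With containerd selected as the runtime, the docker and crio units report "inactive": `systemctl is-active` exits 0 only for an active unit (3 for an inactive one, which is the ssh "status 3" above and surfaces as minikube exit status 1). A sketch of the same probe via `minikube ssh`, keying off the printed state rather than the exit code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func unitActive(profile, unit string) bool {
	// CombinedOutput: an inactive unit prints "inactive" and exits non-zero.
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	return strings.HasPrefix(strings.TrimSpace(string(out)), "active")
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		fmt.Printf("%s active: %v\n", unit, unitActive("functional-876483", unit))
	}
}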

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-876483 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-876483 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-876483 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-876483 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 40434: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 version -o=json --components: (1.336200668s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-876483 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-876483 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [00fa8a74-ca50-4490-9e32-1b0ee0cac4b7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [00fa8a74-ca50-4490-9e32-1b0ee0cac4b7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.005445461s
I1204 23:21:39.302477    7736 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-876483 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-876483
docker.io/kindest/kindnetd:v20241023-a345ebe4
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kicbase/echo-server:functional-876483
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-876483 image ls --format short --alsologtostderr:
I1204 23:22:13.206638   45756 out.go:345] Setting OutFile to fd 1 ...
I1204 23:22:13.206948   45756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:13.206979   45756 out.go:358] Setting ErrFile to fd 2...
I1204 23:22:13.207008   45756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:13.207327   45756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
I1204 23:22:13.208207   45756 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:13.208416   45756 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:13.209031   45756 cli_runner.go:164] Run: docker container inspect functional-876483 --format={{.State.Status}}
I1204 23:22:13.230068   45756 ssh_runner.go:195] Run: systemctl --version
I1204 23:22:13.230137   45756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876483
I1204 23:22:13.255057   45756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/functional-876483/id_rsa Username:docker}
I1204 23:22:13.367918   45756 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-876483 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.2            | sha256:f9c264 | 25.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.2            | sha256:d6b061 | 18.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.2            | sha256:021d24 | 26.8MB |
| docker.io/kicbase/echo-server               | functional-876483  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-876483  | sha256:e9d604 | 992B   |
| docker.io/library/nginx                     | alpine             | sha256:dba92e | 24.3MB |
| docker.io/library/nginx                     | latest             | sha256:bdf62f | 68.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kindest/kindnetd                  | v20241023-a345ebe4 | sha256:55b97e | 35.3MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:0bcd66 | 35.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2            | sha256:9404ae | 23.9MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-876483 image ls --format table --alsologtostderr:
I1204 23:22:14.154077   45993 out.go:345] Setting OutFile to fd 1 ...
I1204 23:22:14.154292   45993 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:14.154321   45993 out.go:358] Setting ErrFile to fd 2...
I1204 23:22:14.154341   45993 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:14.155194   45993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
I1204 23:22:14.156666   45993 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:14.156802   45993 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:14.158966   45993 cli_runner.go:164] Run: docker container inspect functional-876483 --format={{.State.Status}}
I1204 23:22:14.175778   45993 ssh_runner.go:195] Run: systemctl --version
I1204 23:22:14.175832   45993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876483
I1204 23:22:14.195213   45993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/functional-876483/id_rsa Username:docker}
I1204 23:22:14.281872   45993 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-876483 image ls --format json --alsologtostderr:
[{"id":"sha256:dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"24250568"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-876483"],"size":"2173567"},{"id":"sha256:55b97e1cbb2a39e125fd41804d8dd0279b34111fe79fd4673ddc92bc97431ca2","repoDigests":["docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16"],"repoTags":["docker.io/kindest/kindnetd:v20241023-a345ebe4"],"size":"35319207"},{"id":"sha256:ba04bb24b95
753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"23872272"},{"id":"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"26768683"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["
registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:e9d6041e94e901eb7c8d7e7819c34f0f514309b8d63951573c94f615b3cb132e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-876483"],"size":"992"},{"id":"sha256:bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":["docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"68524740"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id
":"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"18429679"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"35320503"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a59
6ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"25612805"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-876483 image ls --format json --alsologtostderr:
I1204 23:22:13.912100   45949 out.go:345] Setting OutFile to fd 1 ...
I1204 23:22:13.912282   45949 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:13.912293   45949 out.go:358] Setting ErrFile to fd 2...
I1204 23:22:13.912298   45949 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:13.912540   45949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
I1204 23:22:13.913228   45949 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:13.913361   45949 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:13.913891   45949 cli_runner.go:164] Run: docker container inspect functional-876483 --format={{.State.Status}}
I1204 23:22:13.934325   45949 ssh_runner.go:195] Run: systemctl --version
I1204 23:22:13.934385   45949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876483
I1204 23:22:13.952592   45949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/functional-876483/id_rsa Username:docker}
I1204 23:22:14.041680   45949 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
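
As the stderr trace above shows, the image ls command shells into the node and reads containerd's image store through crictl. A minimal sketch of querying that store directly, assuming the functional-876483 profile is running (the jq filter is illustrative and assumes jq is installed on the host):

    # list the tags containerd knows about, straight from crictl's JSON
    minikube -p functional-876483 ssh -- sudo crictl images --output json | jq -r '.images[].repoTags[]'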

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-876483 image ls --format yaml --alsologtostderr:
- id: sha256:55b97e1cbb2a39e125fd41804d8dd0279b34111fe79fd4673ddc92bc97431ca2
repoDigests:
- docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16
repoTags:
- docker.io/kindest/kindnetd:v20241023-a345ebe4
size: "35319207"
- id: sha256:dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
repoTags:
- docker.io/library/nginx:alpine
size: "24250568"
- id: sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "25612805"
- id: sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "23872272"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:e9d6041e94e901eb7c8d7e7819c34f0f514309b8d63951573c94f615b3cb132e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-876483
size: "992"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "18429679"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-876483
size: "2173567"
- id: sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "35320503"
- id: sha256:bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests:
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "68524740"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "26768683"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-876483 image ls --format yaml --alsologtostderr:
I1204 23:22:13.585758   45792 out.go:345] Setting OutFile to fd 1 ...
I1204 23:22:13.585869   45792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:13.585875   45792 out.go:358] Setting ErrFile to fd 2...
I1204 23:22:13.585879   45792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:13.586154   45792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
I1204 23:22:13.586835   45792 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:13.586956   45792 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:13.587467   45792 cli_runner.go:164] Run: docker container inspect functional-876483 --format={{.State.Status}}
I1204 23:22:13.604764   45792 ssh_runner.go:195] Run: systemctl --version
I1204 23:22:13.604832   45792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876483
I1204 23:22:13.642929   45792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/functional-876483/id_rsa Username:docker}
I1204 23:22:13.750221   45792 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-876483 ssh pgrep buildkitd: exit status 1 (285.587748ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image build -t localhost/my-image:functional-876483 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 image build -t localhost/my-image:functional-876483 testdata/build --alsologtostderr: (3.382231518s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-876483 image build -t localhost/my-image:functional-876483 testdata/build --alsologtostderr:
I1204 23:22:14.684970   46082 out.go:345] Setting OutFile to fd 1 ...
I1204 23:22:14.685253   46082 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:14.685285   46082 out.go:358] Setting ErrFile to fd 2...
I1204 23:22:14.685306   46082 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:22:14.685611   46082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
I1204 23:22:14.686341   46082 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:14.687955   46082 config.go:182] Loaded profile config "functional-876483": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1204 23:22:14.688527   46082 cli_runner.go:164] Run: docker container inspect functional-876483 --format={{.State.Status}}
I1204 23:22:14.708120   46082 ssh_runner.go:195] Run: systemctl --version
I1204 23:22:14.708172   46082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-876483
I1204 23:22:14.726452   46082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/functional-876483/id_rsa Username:docker}
I1204 23:22:14.813790   46082 build_images.go:161] Building image from path: /tmp/build.441088739.tar
I1204 23:22:14.813912   46082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1204 23:22:14.823775   46082 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.441088739.tar
I1204 23:22:14.833764   46082 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.441088739.tar: stat -c "%s %y" /var/lib/minikube/build/build.441088739.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.441088739.tar': No such file or directory
I1204 23:22:14.833808   46082 ssh_runner.go:362] scp /tmp/build.441088739.tar --> /var/lib/minikube/build/build.441088739.tar (3072 bytes)
I1204 23:22:14.861165   46082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.441088739
I1204 23:22:14.871507   46082 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.441088739 -xf /var/lib/minikube/build/build.441088739.tar
I1204 23:22:14.881551   46082 containerd.go:394] Building image: /var/lib/minikube/build/build.441088739
I1204 23:22:14.881682   46082 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.441088739 --local dockerfile=/var/lib/minikube/build/build.441088739 --output type=image,name=localhost/my-image:functional-876483
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:cf4016bd7475b43bc037b637f342daadf1bf475adab4a9bb02892de25e3e1820
#8 exporting manifest sha256:cf4016bd7475b43bc037b637f342daadf1bf475adab4a9bb02892de25e3e1820 done
#8 exporting config sha256:bc8dc89496e85180c471ea4014d6471fecfc0ba69f2264292dc531de84a7af15 done
#8 naming to localhost/my-image:functional-876483 done
#8 DONE 0.1s
I1204 23:22:17.971531   46082 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.441088739 --local dockerfile=/var/lib/minikube/build/build.441088739 --output type=image,name=localhost/my-image:functional-876483: (3.089804758s)
I1204 23:22:17.971606   46082 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.441088739
I1204 23:22:17.983398   46082 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.441088739.tar
I1204 23:22:17.992604   46082 build_images.go:217] Built localhost/my-image:functional-876483 from /tmp/build.441088739.tar
I1204 23:22:17.992632   46082 build_images.go:133] succeeded building to: functional-876483
I1204 23:22:17.992637   46082 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
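
Build steps #5-#7 above imply a three-instruction Dockerfile. A minimal sketch of reproducing the build by hand, assuming testdata/build matches the logged steps (the content.txt payload is a placeholder, not the real test data):

    # reconstruct the assumed build context
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    echo placeholder > content.txt
    # minikube tars the context, copies it to the node, and drives buildctl, as in the trace above
    minikube -p functional-876483 image build -t localhost/my-image:functional-876483 .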

TestFunctional/parallel/ImageCommands/Setup (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-876483
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image load --daemon kicbase/echo-server:functional-876483 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 image load --daemon kicbase/echo-server:functional-876483 --alsologtostderr: (1.161300158s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image load --daemon kicbase/echo-server:functional-876483 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-876483
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image load --daemon kicbase/echo-server:functional-876483 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image save kicbase/echo-server:functional-876483 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image rm kicbase/echo-server:functional-876483 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-876483
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 image save --daemon kicbase/echo-server:functional-876483 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-876483
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
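
Taken together, the save/remove/load tests above exercise a full image round trip. A minimal sketch of the same cycle, using only commands that appear in the log (the tarball path is arbitrary):

    # save the image from the cluster to a tarball, then drop the host-side copy
    minikube -p functional-876483 image save kicbase/echo-server:functional-876483 /tmp/echo-server-save.tar
    docker rmi kicbase/echo-server:functional-876483
    # load it back into the cluster from the file, then into the host docker daemon
    minikube -p functional-876483 image load /tmp/echo-server-save.tar
    minikube -p functional-876483 image save --daemon kicbase/echo-server:functional-876483
    docker image inspect kicbase/echo-server:functional-876483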

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 update-context --alsologtostderr -v=2
2024/12/04 23:22:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-876483 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.114.3 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-876483 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
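
The three tunnel tests above cover the tunnel lifecycle end to end. A minimal sketch of the same checks run by hand, assuming the nginx-svc LoadBalancer service from earlier in the run (10.105.114.3 is the ingress IP this run happened to get; the curl probe is an assumption about what AccessDirect checks):

    # hold routes to LoadBalancer services open in the background
    minikube -p functional-876483 tunnel &
    # the service should now report an ingress IP...
    kubectl --context functional-876483 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # ...which is reachable directly from the host
    curl http://10.105.114.3/
    kill %1   # deleting the tunnel removes the route again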

TestFunctional/parallel/MountCmd/any-port (7.35s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdany-port2005829097/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733354500357584103" to /tmp/TestFunctionalparallelMountCmdany-port2005829097/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733354500357584103" to /tmp/TestFunctionalparallelMountCmdany-port2005829097/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733354500357584103" to /tmp/TestFunctionalparallelMountCmdany-port2005829097/001/test-1733354500357584103
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  4 23:21 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  4 23:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  4 23:21 test-1733354500357584103
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh cat /mount-9p/test-1733354500357584103
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-876483 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9fe085b3-b3bf-4a14-8bed-89f867c6c8e5] Pending
helpers_test.go:344: "busybox-mount" [9fe085b3-b3bf-4a14-8bed-89f867c6c8e5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9fe085b3-b3bf-4a14-8bed-89f867c6c8e5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9fe085b3-b3bf-4a14-8bed-89f867c6c8e5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00315917s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-876483 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdany-port2005829097/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.35s)
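
The any-port test drives the same commands a user would run interactively. A minimal sketch of checking a 9p mount by hand, assuming the functional-876483 profile (the host path is arbitrary):

    # host: expose a local directory inside the guest over 9p
    minikube mount -p functional-876483 /tmp/mount-src:/mount-9p &
    # guest: confirm the filesystem type, then inspect the contents
    minikube -p functional-876483 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-876483 ssh -- ls -la /mount-9p
    kill %1   # stopping the mount process tears the mount down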

TestFunctional/parallel/MountCmd/specific-port (2.17s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdspecific-port520824856/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-876483 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (422.865348ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1204 23:21:48.134255    7736 retry.go:31] will retry after 548.068871ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdspecific-port520824856/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-876483 ssh "sudo umount -f /mount-9p": exit status 1 (403.141416ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-876483 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdspecific-port520824856/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333491188/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333491188/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333491188/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-876483 ssh "findmnt -T" /mount1: (1.042119174s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-876483 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333491188/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333491188/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-876483 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1333491188/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-876483 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-876483 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-5kxt7" [45ceed9b-85da-40ee-ab55-a25670d3b2f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-5kxt7" [45ceed9b-85da-40ee-ab55-a25670d3b2f6] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004669678s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "397.674641ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "90.054065ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "409.683621ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "61.377362ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 service list -o json
functional_test.go:1494: Took "611.781193ms" to run "out/minikube-linux-arm64 -p functional-876483 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30137
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-876483 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30137
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
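
End to end, the ServiceCmd tests above reduce to the following workflow. A minimal sketch with the same image and flags, assuming the functional-876483 profile (NodePort 30137 is whatever the cluster assigned in this run):

    # deploy the echo server and expose it as a NodePort service
    kubectl --context functional-876483 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-876483 expose deployment hello-node --type=NodePort --port=8080
    # resolve reachable endpoints through minikube
    minikube -p functional-876483 service hello-node --url            # http://192.168.49.2:30137
    minikube -p functional-876483 service --https --url hello-node    # https://192.168.49.2:30137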

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-876483
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-876483
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-876483
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (114.09s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-529752 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1204 23:22:29.228686    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-529752 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m53.24569411s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (114.09s)

TestMultiControlPlane/serial/DeployApp (45.81s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- rollout status deployment/busybox
E1204 23:24:45.366732    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-529752 -- rollout status deployment/busybox: (42.73541276s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-65rfg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-j95k4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-xp2ln -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-65rfg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-j95k4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-xp2ln -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-65rfg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-j95k4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-xp2ln -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (45.81s)

TestMultiControlPlane/serial/PingHostFromPods (1.66s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-65rfg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-65rfg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-j95k4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-j95k4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-xp2ln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-529752 -- exec busybox-7dff88458-xp2ln -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)
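
For each busybox pod, the test verifies host reachability in two steps. A minimal sketch of the same probe against one pod, reusing a pod name from this run (192.168.49.1 is the gateway of the cluster's docker network):

    # resolve the host's address as seen from inside the pod
    kubectl --context ha-529752 exec busybox-7dff88458-65rfg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # then ping that gateway address once
    kubectl --context ha-529752 exec busybox-7dff88458-65rfg -- sh -c "ping -c 1 192.168.49.1"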

TestMultiControlPlane/serial/AddWorkerNode (21.37s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-529752 -v=7 --alsologtostderr
E1204 23:25:13.071650    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-529752 -v=7 --alsologtostderr: (20.364530142s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr: (1.002443188s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.37s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-529752 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.000190423s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.00s)

TestMultiControlPlane/serial/CopyFile (18.56s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp testdata/cp-test.txt ha-529752:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1419734303/001/cp-test_ha-529752.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752:/home/docker/cp-test.txt ha-529752-m02:/home/docker/cp-test_ha-529752_ha-529752-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m02 "sudo cat /home/docker/cp-test_ha-529752_ha-529752-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752:/home/docker/cp-test.txt ha-529752-m03:/home/docker/cp-test_ha-529752_ha-529752-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m03 "sudo cat /home/docker/cp-test_ha-529752_ha-529752-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752:/home/docker/cp-test.txt ha-529752-m04:/home/docker/cp-test_ha-529752_ha-529752-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m04 "sudo cat /home/docker/cp-test_ha-529752_ha-529752-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp testdata/cp-test.txt ha-529752-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1419734303/001/cp-test_ha-529752-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m02:/home/docker/cp-test.txt ha-529752:/home/docker/cp-test_ha-529752-m02_ha-529752.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752 "sudo cat /home/docker/cp-test_ha-529752-m02_ha-529752.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m02:/home/docker/cp-test.txt ha-529752-m03:/home/docker/cp-test_ha-529752-m02_ha-529752-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m03 "sudo cat /home/docker/cp-test_ha-529752-m02_ha-529752-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m02:/home/docker/cp-test.txt ha-529752-m04:/home/docker/cp-test_ha-529752-m02_ha-529752-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m04 "sudo cat /home/docker/cp-test_ha-529752-m02_ha-529752-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp testdata/cp-test.txt ha-529752-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1419734303/001/cp-test_ha-529752-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m03:/home/docker/cp-test.txt ha-529752:/home/docker/cp-test_ha-529752-m03_ha-529752.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752 "sudo cat /home/docker/cp-test_ha-529752-m03_ha-529752.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m03:/home/docker/cp-test.txt ha-529752-m02:/home/docker/cp-test_ha-529752-m03_ha-529752-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m02 "sudo cat /home/docker/cp-test_ha-529752-m03_ha-529752-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m03:/home/docker/cp-test.txt ha-529752-m04:/home/docker/cp-test_ha-529752-m03_ha-529752-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m04 "sudo cat /home/docker/cp-test_ha-529752-m03_ha-529752-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp testdata/cp-test.txt ha-529752-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1419734303/001/cp-test_ha-529752-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m04:/home/docker/cp-test.txt ha-529752:/home/docker/cp-test_ha-529752-m04_ha-529752.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752 "sudo cat /home/docker/cp-test_ha-529752-m04_ha-529752.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m04:/home/docker/cp-test.txt ha-529752-m02:/home/docker/cp-test_ha-529752-m04_ha-529752-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m02 "sudo cat /home/docker/cp-test_ha-529752-m04_ha-529752-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 cp ha-529752-m04:/home/docker/cp-test.txt ha-529752-m03:/home/docker/cp-test_ha-529752-m04_ha-529752-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 ssh -n ha-529752-m03 "sudo cat /home/docker/cp-test_ha-529752-m04_ha-529752-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.56s)

TestMultiControlPlane/serial/StopSecondaryNode (12.8s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-529752 node stop m02 -v=7 --alsologtostderr: (12.044284733s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr: exit status 7 (757.434907ms)

-- stdout --
	ha-529752
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-529752-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-529752-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-529752-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1204 23:25:58.035940   62588 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:25:58.036107   62588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:58.036135   62588 out.go:358] Setting ErrFile to fd 2...
	I1204 23:25:58.036140   62588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:58.036433   62588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1204 23:25:58.036727   62588 out.go:352] Setting JSON to false
	I1204 23:25:58.036771   62588 mustload.go:65] Loading cluster: ha-529752
	I1204 23:25:58.036829   62588 notify.go:220] Checking for updates...
	I1204 23:25:58.037395   62588 config.go:182] Loaded profile config "ha-529752": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1204 23:25:58.037417   62588 status.go:174] checking status of ha-529752 ...
	I1204 23:25:58.038170   62588 cli_runner.go:164] Run: docker container inspect ha-529752 --format={{.State.Status}}
	I1204 23:25:58.058613   62588 status.go:371] ha-529752 host status = "Running" (err=<nil>)
	I1204 23:25:58.058641   62588 host.go:66] Checking if "ha-529752" exists ...
	I1204 23:25:58.058952   62588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-529752
	I1204 23:25:58.087063   62588 host.go:66] Checking if "ha-529752" exists ...
	I1204 23:25:58.087514   62588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:25:58.087574   62588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-529752
	I1204 23:25:58.110452   62588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/ha-529752/id_rsa Username:docker}
	I1204 23:25:58.198414   62588 ssh_runner.go:195] Run: systemctl --version
	I1204 23:25:58.203078   62588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:25:58.215358   62588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:25:58.293488   62588 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-12-04 23:25:58.27971107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:25:58.295079   62588 kubeconfig.go:125] found "ha-529752" server: "https://192.168.49.254:8443"
	I1204 23:25:58.295123   62588 api_server.go:166] Checking apiserver status ...
	I1204 23:25:58.295176   62588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:25:58.307545   62588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1505/cgroup
	I1204 23:25:58.317662   62588 api_server.go:182] apiserver freezer: "11:freezer:/docker/912ad658abe5363ec8cc9515573d5af46489f7d945bc79233a489726dd51e543/kubepods/burstable/pod00464bb35e4ac8d871e459e9dd15471e/9243d9a44d24a050025d66f6d46a2ab1cefeb5c2c4132e1fcb3e33b7dc34a5f7"
	I1204 23:25:58.317732   62588 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/912ad658abe5363ec8cc9515573d5af46489f7d945bc79233a489726dd51e543/kubepods/burstable/pod00464bb35e4ac8d871e459e9dd15471e/9243d9a44d24a050025d66f6d46a2ab1cefeb5c2c4132e1fcb3e33b7dc34a5f7/freezer.state
	I1204 23:25:58.329073   62588 api_server.go:204] freezer state: "THAWED"
	I1204 23:25:58.329142   62588 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1204 23:25:58.339100   62588 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1204 23:25:58.339129   62588 status.go:463] ha-529752 apiserver status = Running (err=<nil>)
	I1204 23:25:58.339140   62588 status.go:176] ha-529752 status: &{Name:ha-529752 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:25:58.339166   62588 status.go:174] checking status of ha-529752-m02 ...
	I1204 23:25:58.339505   62588 cli_runner.go:164] Run: docker container inspect ha-529752-m02 --format={{.State.Status}}
	I1204 23:25:58.358217   62588 status.go:371] ha-529752-m02 host status = "Stopped" (err=<nil>)
	I1204 23:25:58.358236   62588 status.go:384] host is not running, skipping remaining checks
	I1204 23:25:58.358243   62588 status.go:176] ha-529752-m02 status: &{Name:ha-529752-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:25:58.358267   62588 status.go:174] checking status of ha-529752-m03 ...
	I1204 23:25:58.358593   62588 cli_runner.go:164] Run: docker container inspect ha-529752-m03 --format={{.State.Status}}
	I1204 23:25:58.380915   62588 status.go:371] ha-529752-m03 host status = "Running" (err=<nil>)
	I1204 23:25:58.380955   62588 host.go:66] Checking if "ha-529752-m03" exists ...
	I1204 23:25:58.381582   62588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-529752-m03
	I1204 23:25:58.398938   62588 host.go:66] Checking if "ha-529752-m03" exists ...
	I1204 23:25:58.399404   62588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:25:58.399453   62588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-529752-m03
	I1204 23:25:58.417922   62588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/ha-529752-m03/id_rsa Username:docker}
	I1204 23:25:58.506633   62588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:25:58.519279   62588 kubeconfig.go:125] found "ha-529752" server: "https://192.168.49.254:8443"
	I1204 23:25:58.519311   62588 api_server.go:166] Checking apiserver status ...
	I1204 23:25:58.519352   62588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:25:58.530062   62588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1297/cgroup
	I1204 23:25:58.539619   62588 api_server.go:182] apiserver freezer: "11:freezer:/docker/d0be536905ccd6deb6cc6d6cb620816bed7f3bb1713b9af4c3dd65403c9294aa/kubepods/burstable/podd0642340b18fd26fb8414bbe36111cd0/1fe31be90a9f17c086644f8afe36f5a607d7933632f0c71f7194dea284c8d3b5"
	I1204 23:25:58.539689   62588 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d0be536905ccd6deb6cc6d6cb620816bed7f3bb1713b9af4c3dd65403c9294aa/kubepods/burstable/podd0642340b18fd26fb8414bbe36111cd0/1fe31be90a9f17c086644f8afe36f5a607d7933632f0c71f7194dea284c8d3b5/freezer.state
	I1204 23:25:58.549337   62588 api_server.go:204] freezer state: "THAWED"
	I1204 23:25:58.549372   62588 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1204 23:25:58.557231   62588 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1204 23:25:58.557259   62588 status.go:463] ha-529752-m03 apiserver status = Running (err=<nil>)
	I1204 23:25:58.557268   62588 status.go:176] ha-529752-m03 status: &{Name:ha-529752-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:25:58.557285   62588 status.go:174] checking status of ha-529752-m04 ...
	I1204 23:25:58.557590   62588 cli_runner.go:164] Run: docker container inspect ha-529752-m04 --format={{.State.Status}}
	I1204 23:25:58.575155   62588 status.go:371] ha-529752-m04 host status = "Running" (err=<nil>)
	I1204 23:25:58.575193   62588 host.go:66] Checking if "ha-529752-m04" exists ...
	I1204 23:25:58.575471   62588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-529752-m04
	I1204 23:25:58.599460   62588 host.go:66] Checking if "ha-529752-m04" exists ...
	I1204 23:25:58.600639   62588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:25:58.600984   62588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-529752-m04
	I1204 23:25:58.619129   62588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/ha-529752-m04/id_rsa Username:docker}
	I1204 23:25:58.706177   62588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:25:58.718258   62588 status.go:176] ha-529752-m04 status: &{Name:ha-529752-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.80s)
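
A note on what the status probe above actually verified: for each running control-plane node, minikube located the kube-apiserver process with pgrep, confirmed the process's freezer cgroup was THAWED, and then queried /healthz on the load-balanced endpoint (the api_server.go lines in the stderr block). A minimal Go sketch of that final step, assuming the same endpoint and skipping TLS verification purely for illustration (minikube itself validates against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	func main() {
		// Illustrative only: the real check authenticates with the cluster CA
		// rather than disabling certificate verification.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.StatusCode) // the run above saw 200: ok
	}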

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (17.95s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-529752 node start m02 -v=7 --alsologtostderr: (16.89466738s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (17.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.58s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-529752 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-529752 -v=7 --alsologtostderr
E1204 23:26:30.816411    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:30.822854    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:30.834287    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:30.855634    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:30.896996    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:30.978399    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:31.139867    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:31.461483    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:32.102860    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:33.384278    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:35.945807    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:41.067858    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:26:51.309878    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-529752 -v=7 --alsologtostderr: (37.382733879s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-529752 --wait=true -v=7 --alsologtostderr
E1204 23:27:11.791725    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:27:52.753746    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-529752 --wait=true -v=7 --alsologtostderr: (1m39.047307709s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-529752
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.58s)
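
The burst of cert_rotation errors during the stop is one retry loop, not independent failures: the gaps between the timestamps roughly double each attempt (about 6, 12, 21, 41, 82, 161, 322, 641, 1281, 2561, 5122, 10242 ms), which looks like client-go's exponential backoff against a client certificate left behind by the already-deleted functional-876483 profile. A quick check of that doubling, using the offsets read off the log above:

	package main

	import "fmt"

	func main() {
		// Seconds-past-23:26 offsets of the cert_rotation errors above.
		ts := []float64{30.816, 30.822, 30.834, 30.855, 30.896, 30.978,
			31.139, 31.461, 32.102, 33.384, 35.945, 41.067, 51.309}
		for i := 1; i < len(ts); i++ {
			// Each gap comes out at roughly 2x the previous one.
			fmt.Printf("gap %2d: %6.0f ms\n", i, (ts[i]-ts[i-1])*1000)
		}
	}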

TestMultiControlPlane/serial/DeleteSecondaryNode (11.22s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-529752 node delete m03 -v=7 --alsologtostderr: (10.288577061s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.22s)
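
The final assertion above pipes kubectl get nodes through a go-template that prints the status of each node's Ready condition, one value per line; the test then expects every printed value to be True. A self-contained sketch of that template evaluated over mock node data (lowercase keys work here, as in kubectl, because the template is applied to the raw JSON object rather than typed structs):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Template string taken verbatim from the test invocation above,
		// minus the shell quoting; the node list is a minimal mock.
		const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		nodes := map[string]any{
			"items": []any{
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "Ready", "status": "True"},
				}}},
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		t.Execute(os.Stdout, nodes) // prints " True" once per Ready node
	}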

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

TestMultiControlPlane/serial/StopCluster (36.02s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 stop -v=7 --alsologtostderr
E1204 23:29:14.675140    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-529752 stop -v=7 --alsologtostderr: (35.905954921s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr: exit status 7 (118.05899ms)

-- stdout --
	ha-529752
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-529752-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-529752-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1204 23:29:22.862501   76996 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:29:22.862704   76996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:29:22.862729   76996 out.go:358] Setting ErrFile to fd 2...
	I1204 23:29:22.862750   76996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:29:22.863511   76996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1204 23:29:22.863773   76996 out.go:352] Setting JSON to false
	I1204 23:29:22.863843   76996 mustload.go:65] Loading cluster: ha-529752
	I1204 23:29:22.863938   76996 notify.go:220] Checking for updates...
	I1204 23:29:22.864933   76996 config.go:182] Loaded profile config "ha-529752": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1204 23:29:22.864979   76996 status.go:174] checking status of ha-529752 ...
	I1204 23:29:22.865630   76996 cli_runner.go:164] Run: docker container inspect ha-529752 --format={{.State.Status}}
	I1204 23:29:22.882026   76996 status.go:371] ha-529752 host status = "Stopped" (err=<nil>)
	I1204 23:29:22.882050   76996 status.go:384] host is not running, skipping remaining checks
	I1204 23:29:22.882057   76996 status.go:176] ha-529752 status: &{Name:ha-529752 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:29:22.882087   76996 status.go:174] checking status of ha-529752-m02 ...
	I1204 23:29:22.882381   76996 cli_runner.go:164] Run: docker container inspect ha-529752-m02 --format={{.State.Status}}
	I1204 23:29:22.912224   76996 status.go:371] ha-529752-m02 host status = "Stopped" (err=<nil>)
	I1204 23:29:22.912245   76996 status.go:384] host is not running, skipping remaining checks
	I1204 23:29:22.912251   76996 status.go:176] ha-529752-m02 status: &{Name:ha-529752-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:29:22.912269   76996 status.go:174] checking status of ha-529752-m04 ...
	I1204 23:29:22.912574   76996 cli_runner.go:164] Run: docker container inspect ha-529752-m04 --format={{.State.Status}}
	I1204 23:29:22.928789   76996 status.go:371] ha-529752-m04 host status = "Stopped" (err=<nil>)
	I1204 23:29:22.928828   76996 status.go:384] host is not running, skipping remaining checks
	I1204 23:29:22.928836   76996 status.go:176] ha-529752-m04 status: &{Name:ha-529752-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.02s)
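
As with the StopSecondaryNode check earlier, exit status 7 from minikube status is the expected outcome here, not a failure: the command encodes component state as a bitmask and only exits 0 when everything is running. The decoder below assumes the flag values host=1, kubelet=2, apiserver=4, mirroring the constants in minikube's status command; treat that mapping as an assumption about this build rather than a documented contract:

	package main

	import "fmt"

	// Assumed flag values (cf. cmd/minikube/cmd/status.go; unverified
	// against this exact revision).
	const (
		hostStopped      = 1 << 0
		kubeletStopped   = 1 << 1
		apiserverStopped = 1 << 2
	)

	func main() {
		code := 7 // exit status of the `minikube status` run above
		fmt.Println("host stopped:     ", code&hostStopped != 0)
		fmt.Println("kubelet stopped:  ", code&kubeletStopped != 0)
		fmt.Println("apiserver stopped:", code&apiserverStopped != 0)
	}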

TestMultiControlPlane/serial/RestartCluster (78.81s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-529752 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1204 23:29:45.366061    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-529752 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.774079504s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.81s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (43.59s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-529752 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-529752 --control-plane -v=7 --alsologtostderr: (42.637449342s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-529752 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

TestJSONOutput/start/Command (60.2s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-186030 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1204 23:31:58.517337    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-186030 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m0.190873367s)
--- PASS: TestJSONOutput/start/Command (60.20s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-186030 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-186030 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-186030 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-186030 --output=json --user=testUser: (5.745602065s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-758681 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-758681 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (71.906418ms)

-- stdout --
	{"specversion":"1.0","id":"cbf9c2d5-7992-4e62-966f-dd0924758476","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-758681] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a0d58cd-cc64-4784-9a86-dab3ea011232","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20045"}}
	{"specversion":"1.0","id":"978ff8d0-8077-423d-919f-1bd764d0bf84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e90f12ed-f83d-4cfa-9d5e-7765d7c7e283","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig"}}
	{"specversion":"1.0","id":"1247506b-a1ed-4cde-aa73-a4de90172885","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube"}}
	{"specversion":"1.0","id":"8fc5ec4d-b10d-414e-9ca9-d6a6b245d061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"086f51bf-f3d9-428d-a252-760ed06e8c41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d30a594d-df78-4755-be58-dba942a150f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-758681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-758681
--- PASS: TestErrorJSONOutput (0.21s)
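
Each line of the --output=json stream above is a CloudEvents 1.0 envelope carrying a minikube-specific type and a string-to-string data payload; that structure is what the Audit and parallel step-ordering subtests parse. A minimal decoder for the error event from this run (field names copied from the log; the struct is illustrative, not minikube's own type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		raw := `{"specversion":"1.0","id":"d30a594d-df78-4755-be58-dba942a150f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var ev event
		if err := json.Unmarshal([]byte(raw), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, "->", ev.Data["name"], "exit", ev.Data["exitcode"])
	}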

TestKicCustomNetwork/create_custom_network (39.11s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-546018 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-546018 --network=: (37.053183712s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-546018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-546018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-546018: (2.039485158s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.11s)

TestKicCustomNetwork/use_default_bridge_network (32.42s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-181139 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-181139 --network=bridge: (30.463289224s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-181139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-181139
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-181139: (1.931970372s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.42s)

TestKicExistingNetwork (34.14s)

=== RUN   TestKicExistingNetwork
I1204 23:33:59.228709    7736 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1204 23:33:59.243928    7736 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1204 23:33:59.244031    7736 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1204 23:33:59.244048    7736 cli_runner.go:164] Run: docker network inspect existing-network
W1204 23:33:59.259672    7736 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1204 23:33:59.259700    7736 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1204 23:33:59.259713    7736 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1204 23:33:59.259812    7736 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1204 23:33:59.276454    7736 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e671b4fd53b8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:05:f6:18:6f} reservation:<nil>}
I1204 23:33:59.276780    7736 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001db8170}
I1204 23:33:59.276813    7736 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1204 23:33:59.276862    7736 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1204 23:33:59.342992    7736 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-012531 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-012531 --network=existing-network: (32.007210243s)
helpers_test.go:175: Cleaning up "existing-network-012531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-012531
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-012531: (1.976200353s)
I1204 23:34:33.341613    7736 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.14s)
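
The interesting detail in this test is in the network_create lines: minikube saw 192.168.49.0/24 already taken by the existing kic bridge and settled on the next candidate, 192.168.58.0/24. The candidates appear to advance the third octet in steps of 9 (49, 58, 67, ...); the sketch below reproduces that selection under that assumption, with a hand-rolled taken set standing in for real interface probing:

	package main

	import "fmt"

	// firstFreeSubnet walks candidate /24s the way the log suggests and
	// returns the first one not already reserved. The 49-start and step
	// of 9 are assumptions read off this run, not minikube's contract.
	func firstFreeSubnet(taken map[string]bool) (string, bool) {
		for third := 49; third <= 254; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr, true
			}
		}
		return "", false
	}

	func main() {
		taken := map[string]bool{"192.168.49.0/24": true} // occupied per the log
		subnet, _ := firstFreeSubnet(taken)
		fmt.Println(subnet) // 192.168.58.0/24, matching the run above
	}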

TestKicCustomSubnet (33.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-160961 --subnet=192.168.60.0/24
E1204 23:34:45.366613    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-160961 --subnet=192.168.60.0/24: (31.30674563s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-160961 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-160961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-160961
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-160961: (2.123980459s)
--- PASS: TestKicCustomSubnet (33.46s)

TestKicStaticIP (33.78s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-660169 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-660169 --static-ip=192.168.200.200: (31.500879687s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-660169 ip
helpers_test.go:175: Cleaning up "static-ip-660169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-660169
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-660169: (2.133834309s)
--- PASS: TestKicStaticIP (33.78s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (64.62s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-738912 --driver=docker  --container-runtime=containerd
E1204 23:36:08.433269    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-738912 --driver=docker  --container-runtime=containerd: (28.738282147s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-742473 --driver=docker  --container-runtime=containerd
E1204 23:36:30.817231    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-742473 --driver=docker  --container-runtime=containerd: (30.252333326s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-738912
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-742473
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-742473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-742473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-742473: (1.953020474s)
helpers_test.go:175: Cleaning up "first-738912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-738912
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-738912: (2.310838301s)
--- PASS: TestMinikubeProfile (64.62s)

TestMountStart/serial/StartWithMountFirst (6.11s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-154470 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-154470 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.109881762s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.11s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-154470 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (8.68s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-156645 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-156645 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.683479198s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.68s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-156645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-154470 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-154470 --alsologtostderr -v=5: (1.624240394s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-156645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-156645
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-156645: (1.201978708s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.18s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-156645
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-156645: (6.180713799s)
--- PASS: TestMountStart/serial/RestartStopped (7.18s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-156645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)
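
Taken together, the Stop/RestartStopped/VerifyMountPostStop steps show the mount options are persisted in the profile and reapplied on restart. A sketch of that cycle (hypothetical profile name):

    minikube stop -p demo
    minikube start -p demo                    # restarts with the saved mount options
    minikube -p demo ssh -- ls /minikube-host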

TestMultiNode/serial/FreshStart2Nodes (75.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-672710 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-672710 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.810572764s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.36s)
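
A fresh two-node cluster of the same shape can be reproduced with the flags the test passes (profile name hypothetical):

    minikube start -p demo --nodes=2 --memory=2200 --wait=true \
      --driver=docker --container-runtime=containerd
    minikube -p demo status                   # one control plane plus one worker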

TestMultiNode/serial/DeployApp2Nodes (19.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-672710 -- rollout status deployment/busybox: (17.118814309s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-7mh5b -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-bbj6m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-7mh5b -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-bbj6m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-7mh5b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-bbj6m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (19.07s)
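
The DNS probes above exec into each busybox replica and resolve progressively more qualified names. The same check by hand, assuming the deployment from the test manifest and substituting a real pod name for <pod>:

    minikube kubectl -p demo -- rollout status deployment/busybox
    minikube kubectl -p demo -- exec <pod> -- nslookup kubernetes.io
    minikube kubectl -p demo -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local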

TestMultiNode/serial/PingHostFrom2Pods (1.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-7mh5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-7mh5b -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-bbj6m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-672710 -- exec busybox-7dff88458-bbj6m -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.09s)
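
The pipeline above extracts the address that host.minikube.internal resolves to inside a pod, then pings it from the pod. Stand-alone, with <pod> and <host-ip> substituted:

    minikube kubectl -p demo -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    minikube kubectl -p demo -- exec <pod> -- sh -c "ping -c 1 <host-ip>"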

TestMultiNode/serial/AddNode (19.94s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-672710 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-672710 -v 3 --alsologtostderr: (19.271540952s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.94s)
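
Growing a running cluster is a single command; the new worker appears as <profile>-m03 in the status output (profile name hypothetical):

    minikube node add -p demo
    minikube -p demo status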

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-672710 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (9.92s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp testdata/cp-test.txt multinode-672710:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp multinode-672710:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1684722937/001/cp-test_multinode-672710.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp multinode-672710:/home/docker/cp-test.txt multinode-672710-m02:/home/docker/cp-test_multinode-672710_multinode-672710-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m02 "sudo cat /home/docker/cp-test_multinode-672710_multinode-672710-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp multinode-672710:/home/docker/cp-test.txt multinode-672710-m03:/home/docker/cp-test_multinode-672710_multinode-672710-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m03 "sudo cat /home/docker/cp-test_multinode-672710_multinode-672710-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp testdata/cp-test.txt multinode-672710-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp multinode-672710-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1684722937/001/cp-test_multinode-672710-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp multinode-672710-m02:/home/docker/cp-test.txt multinode-672710:/home/docker/cp-test_multinode-672710-m02_multinode-672710.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710 "sudo cat /home/docker/cp-test_multinode-672710-m02_multinode-672710.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp multinode-672710-m02:/home/docker/cp-test.txt multinode-672710-m03:/home/docker/cp-test_multinode-672710-m02_multinode-672710-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m03 "sudo cat /home/docker/cp-test_multinode-672710-m02_multinode-672710-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp testdata/cp-test.txt multinode-672710-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp multinode-672710-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1684722937/001/cp-test_multinode-672710-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp multinode-672710-m03:/home/docker/cp-test.txt multinode-672710:/home/docker/cp-test_multinode-672710-m03_multinode-672710.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710 "sudo cat /home/docker/cp-test_multinode-672710-m03_multinode-672710.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 cp multinode-672710-m03:/home/docker/cp-test.txt multinode-672710-m02:/home/docker/cp-test_multinode-672710-m03_multinode-672710-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 ssh -n multinode-672710-m02 "sudo cat /home/docker/cp-test_multinode-672710-m03_multinode-672710-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.92s)
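
The matrix above exercises minikube cp in all three directions. Condensed, for a hypothetical profile demo with worker demo-m02 (-n selects the node an ssh command runs on):

    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt                # host -> node
    minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test-backup.txt             # node -> host
    minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt   # node -> node
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"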

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-672710 node stop m03: (1.228841776s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-672710 status: exit status 7 (522.68682ms)

-- stdout --
	multinode-672710
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-672710-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-672710-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-672710 status --alsologtostderr: exit status 7 (517.722921ms)

-- stdout --
	multinode-672710
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-672710-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-672710-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1204 23:39:20.852091  130672 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:39:20.852429  130672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:39:20.852461  130672 out.go:358] Setting ErrFile to fd 2...
	I1204 23:39:20.852483  130672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:39:20.852749  130672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1204 23:39:20.852978  130672 out.go:352] Setting JSON to false
	I1204 23:39:20.853049  130672 mustload.go:65] Loading cluster: multinode-672710
	I1204 23:39:20.853157  130672 notify.go:220] Checking for updates...
	I1204 23:39:20.853586  130672 config.go:182] Loaded profile config "multinode-672710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1204 23:39:20.853629  130672 status.go:174] checking status of multinode-672710 ...
	I1204 23:39:20.854504  130672 cli_runner.go:164] Run: docker container inspect multinode-672710 --format={{.State.Status}}
	I1204 23:39:20.875618  130672 status.go:371] multinode-672710 host status = "Running" (err=<nil>)
	I1204 23:39:20.875641  130672 host.go:66] Checking if "multinode-672710" exists ...
	I1204 23:39:20.875955  130672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-672710
	I1204 23:39:20.901239  130672 host.go:66] Checking if "multinode-672710" exists ...
	I1204 23:39:20.901546  130672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:39:20.901590  130672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-672710
	I1204 23:39:20.919692  130672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/multinode-672710/id_rsa Username:docker}
	I1204 23:39:21.006048  130672 ssh_runner.go:195] Run: systemctl --version
	I1204 23:39:21.011158  130672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:39:21.023607  130672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:39:21.078924  130672 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-04 23:39:21.069667076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:39:21.079521  130672 kubeconfig.go:125] found "multinode-672710" server: "https://192.168.67.2:8443"
	I1204 23:39:21.079555  130672 api_server.go:166] Checking apiserver status ...
	I1204 23:39:21.079604  130672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:39:21.092599  130672 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1390/cgroup
	I1204 23:39:21.102275  130672 api_server.go:182] apiserver freezer: "11:freezer:/docker/384f5c2eb66d2f3247e035ecf5e6d4d16e1130f0d2b4e756514151b13f21ffa7/kubepods/burstable/pod6755d15bde7adb1d9d8216066543b288/742e24869e6f80216b84b17994a3191eb4152a30eab9b9478846a43d2863661b"
	I1204 23:39:21.102354  130672 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/384f5c2eb66d2f3247e035ecf5e6d4d16e1130f0d2b4e756514151b13f21ffa7/kubepods/burstable/pod6755d15bde7adb1d9d8216066543b288/742e24869e6f80216b84b17994a3191eb4152a30eab9b9478846a43d2863661b/freezer.state
	I1204 23:39:21.113499  130672 api_server.go:204] freezer state: "THAWED"
	I1204 23:39:21.113533  130672 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1204 23:39:21.121981  130672 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1204 23:39:21.122013  130672 status.go:463] multinode-672710 apiserver status = Running (err=<nil>)
	I1204 23:39:21.122025  130672 status.go:176] multinode-672710 status: &{Name:multinode-672710 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:39:21.122068  130672 status.go:174] checking status of multinode-672710-m02 ...
	I1204 23:39:21.122384  130672 cli_runner.go:164] Run: docker container inspect multinode-672710-m02 --format={{.State.Status}}
	I1204 23:39:21.139779  130672 status.go:371] multinode-672710-m02 host status = "Running" (err=<nil>)
	I1204 23:39:21.139803  130672 host.go:66] Checking if "multinode-672710-m02" exists ...
	I1204 23:39:21.140108  130672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-672710-m02
	I1204 23:39:21.157992  130672 host.go:66] Checking if "multinode-672710-m02" exists ...
	I1204 23:39:21.158381  130672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:39:21.158426  130672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-672710-m02
	I1204 23:39:21.176079  130672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/multinode-672710-m02/id_rsa Username:docker}
	I1204 23:39:21.268799  130672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:39:21.282069  130672 status.go:176] multinode-672710-m02 status: &{Name:multinode-672710-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:39:21.282105  130672 status.go:174] checking status of multinode-672710-m03 ...
	I1204 23:39:21.282399  130672 cli_runner.go:164] Run: docker container inspect multinode-672710-m03 --format={{.State.Status}}
	I1204 23:39:21.303337  130672 status.go:371] multinode-672710-m03 host status = "Stopped" (err=<nil>)
	I1204 23:39:21.303361  130672 status.go:384] host is not running, skipping remaining checks
	I1204 23:39:21.303368  130672 status.go:176] multinode-672710-m03 status: &{Name:multinode-672710-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
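
Note the exit code: with one host stopped, status still prints the full table but exits 7, so scripts should treat 7 as "degraded, output usable" rather than outright failure. Sketch (hypothetical profile name):

    minikube -p demo node stop m03
    minikube -p demo status; echo "exit $?"   # 7 while any host is Stopped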

TestMultiNode/serial/StartAfterStop (9.7s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-672710 node start m03 -v=7 --alsologtostderr: (8.970706308s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.70s)

TestMultiNode/serial/RestartKeepsNodes (90.96s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-672710
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-672710
E1204 23:39:45.366437    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-672710: (24.805055107s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-672710 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-672710 --wait=true -v=8 --alsologtostderr: (1m6.015233271s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-672710
--- PASS: TestMultiNode/serial/RestartKeepsNodes (90.96s)
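
The property under test is that a full stop/start preserves the node list. As a sketch (hypothetical profile name):

    minikube node list -p demo                # records the three nodes
    minikube stop -p demo
    minikube start -p demo --wait=true
    minikube node list -p demo                # same list after the restart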

TestMultiNode/serial/DeleteNode (5.58s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-672710 node delete m03: (4.913243555s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.58s)

TestMultiNode/serial/StopMultiNode (23.91s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 stop
E1204 23:41:30.816521    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-672710 stop: (23.736991963s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-672710 status: exit status 7 (87.086945ms)

-- stdout --
	multinode-672710
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-672710-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-672710 status --alsologtostderr: exit status 7 (87.815792ms)

-- stdout --
	multinode-672710
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-672710-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1204 23:41:31.420658  139121 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:41:31.420838  139121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:41:31.420849  139121 out.go:358] Setting ErrFile to fd 2...
	I1204 23:41:31.420854  139121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:41:31.421084  139121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1204 23:41:31.421319  139121 out.go:352] Setting JSON to false
	I1204 23:41:31.421353  139121 mustload.go:65] Loading cluster: multinode-672710
	I1204 23:41:31.421475  139121 notify.go:220] Checking for updates...
	I1204 23:41:31.421772  139121 config.go:182] Loaded profile config "multinode-672710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1204 23:41:31.421792  139121 status.go:174] checking status of multinode-672710 ...
	I1204 23:41:31.422314  139121 cli_runner.go:164] Run: docker container inspect multinode-672710 --format={{.State.Status}}
	I1204 23:41:31.438618  139121 status.go:371] multinode-672710 host status = "Stopped" (err=<nil>)
	I1204 23:41:31.438638  139121 status.go:384] host is not running, skipping remaining checks
	I1204 23:41:31.438645  139121 status.go:176] multinode-672710 status: &{Name:multinode-672710 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:41:31.438674  139121 status.go:174] checking status of multinode-672710-m02 ...
	I1204 23:41:31.438983  139121 cli_runner.go:164] Run: docker container inspect multinode-672710-m02 --format={{.State.Status}}
	I1204 23:41:31.456051  139121 status.go:371] multinode-672710-m02 host status = "Stopped" (err=<nil>)
	I1204 23:41:31.456076  139121 status.go:384] host is not running, skipping remaining checks
	I1204 23:41:31.456084  139121 status.go:176] multinode-672710-m02 status: &{Name:multinode-672710-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.91s)

TestMultiNode/serial/RestartMultiNode (48.98s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-672710 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-672710 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.302305441s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-672710 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.98s)

TestMultiNode/serial/ValidateNameConflict (33.61s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-672710
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-672710-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-672710-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.223858ms)

-- stdout --
	* [multinode-672710-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-672710-m02' is duplicated with machine name 'multinode-672710-m02' in profile 'multinode-672710'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-672710-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-672710-m03 --driver=docker  --container-runtime=containerd: (31.204475556s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-672710
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-672710: exit status 80 (294.103878ms)

-- stdout --
	* Adding node m03 to cluster multinode-672710 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-672710-m03 already exists in multinode-672710-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-672710-m03
E1204 23:42:53.878813    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-672710-m03: (1.96251235s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.61s)
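
Both failures above are name-collision guards: a new profile may not reuse an existing cluster's machine name (MK_USAGE, exit 14), and node add refuses a name that already belongs to another profile (GUEST_NODE_ADD, exit 80). Sketch of the first guard, for a hypothetical multi-node profile demo whose second machine is demo-m02:

    minikube start -p demo-m02 --driver=docker --container-runtime=containerd
    # X Exiting due to MK_USAGE: Profile name should be unique (exit status 14)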

TestPreload (126.19s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-254359 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-254359 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m28.679269642s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-254359 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-254359 image pull gcr.io/k8s-minikube/busybox: (1.916029661s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-254359
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-254359: (12.04500048s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-254359 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1204 23:44:45.366377    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-254359 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.597601912s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-254359 image list
helpers_test.go:175: Cleaning up "test-preload-254359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-254359
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-254359: (2.489784547s)
--- PASS: TestPreload (126.19s)
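
The flow above checks that an image pulled into a --preload=false cluster is still present after a stop and a preload-enabled restart. Condensed (hypothetical profile name):

    minikube start -p demo --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
    minikube -p demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p demo
    minikube start -p demo
    minikube -p demo image list               # busybox should still be listed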

TestScheduledStopUnix (104.99s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-422999 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-422999 --memory=2048 --driver=docker  --container-runtime=containerd: (28.678101937s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-422999 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-422999 -n scheduled-stop-422999
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-422999 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1204 23:45:33.376057    7736 retry.go:31] will retry after 129.708µs: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.376686    7736 retry.go:31] will retry after 136.966µs: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.381261    7736 retry.go:31] will retry after 304.876µs: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.382406    7736 retry.go:31] will retry after 390.108µs: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.383730    7736 retry.go:31] will retry after 565.774µs: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.385426    7736 retry.go:31] will retry after 1.008603ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.386821    7736 retry.go:31] will retry after 1.445983ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.389047    7736 retry.go:31] will retry after 2.310372ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.392339    7736 retry.go:31] will retry after 2.602948ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.395632    7736 retry.go:31] will retry after 3.505497ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.399868    7736 retry.go:31] will retry after 6.68606ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.407170    7736 retry.go:31] will retry after 4.796934ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.412998    7736 retry.go:31] will retry after 11.060811ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.424329    7736 retry.go:31] will retry after 16.664446ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.441556    7736 retry.go:31] will retry after 41.532446ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
I1204 23:45:33.483791    7736 retry.go:31] will retry after 40.981631ms: open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/scheduled-stop-422999/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-422999 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-422999 -n scheduled-stop-422999
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-422999
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-422999 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1204 23:46:30.817675    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-422999
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-422999: exit status 7 (68.591893ms)

-- stdout --
	scheduled-stop-422999
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-422999 -n scheduled-stop-422999
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-422999 -n scheduled-stop-422999: exit status 7 (75.641666ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-422999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-422999
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-422999: (4.743024849s)
--- PASS: TestScheduledStopUnix (104.99s)
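
The arm/cancel/re-arm sequence above, condensed (hypothetical profile name):

    minikube stop -p demo --schedule 5m         # arm a stop five minutes out
    minikube stop -p demo --cancel-scheduled    # disarm; the host stays Running
    minikube stop -p demo --schedule 15s        # re-arm; shortly after, the host is Stopped
    minikube status -p demo                     # exit 7 once stopped (may be ok)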

TestInsufficientStorage (9.95s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-905101 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-905101 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.513271944s)

-- stdout --
	{"specversion":"1.0","id":"90ca4388-8a2f-49ce-9028-4b475934c0ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-905101] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"67564474-a1e0-414d-aeff-6a17b2ac174f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20045"}}
	{"specversion":"1.0","id":"1e08de91-6d86-4d4c-985a-d0a29f371fc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fbf8e587-f8a5-4dd5-b2b0-4f6161444834","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig"}}
	{"specversion":"1.0","id":"2535a74a-9658-46a0-98c0-a92c89300ded","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube"}}
	{"specversion":"1.0","id":"437abf46-2fcb-4877-9883-2c9ca7491f89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6b852715-fd44-4cbd-b108-b3569069901f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"75636949-927f-42b3-a8fe-f7cdcd9eeeef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f9b73edf-59fb-48a6-99e9-93b15b6fc796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d2393c27-b46a-4dff-95da-12feddd9e421","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba694075-c1a9-4c5a-b549-4f6a97c0b989","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f8f2aeae-6a87-49ca-990b-c4a7d23b3777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-905101\" primary control-plane node in \"insufficient-storage-905101\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"66c80959-e9e7-494d-a812-81f8c311d89d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"70857288-3397-4fcc-b904-25ad81c5344f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"90c340b6-4f51-4c7d-a5af-bee287447354","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-905101 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-905101 --output=json --layout=cluster: exit status 7 (283.005552ms)

-- stdout --
	{"Name":"insufficient-storage-905101","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-905101","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1204 23:46:56.951009  157819 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-905101" does not appear in /home/jenkins/minikube-integration/20045-2283/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-905101 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-905101 --output=json --layout=cluster: exit status 7 (289.993124ms)

-- stdout --
	{"Name":"insufficient-storage-905101","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-905101","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1204 23:46:57.242608  157881 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-905101" does not appear in /home/jenkins/minikube-integration/20045-2283/kubeconfig
	E1204 23:46:57.252887  157881 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/insufficient-storage-905101/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-905101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-905101
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-905101: (1.862205187s)
--- PASS: TestInsufficientStorage (9.95s)
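
The RSRC_DOCKER_STORAGE event above (exit 26) carries its own remediation advice. Per that payload, the options are to free space or bypass the check (hypothetical profile name):

    docker system prune                       # advice 1 from the error payload (optionally with -a)
    minikube ssh -- docker system prune       # advice 3, for the Docker container runtime
    minikube start -p demo --force            # skip the free-space check, as the message notes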

TestRunningBinaryUpgrade (85.11s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1250684304 start -p running-upgrade-888639 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1250684304 start -p running-upgrade-888639 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.288578815s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-888639 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1204 23:52:48.435532    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-888639 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.462583547s)
helpers_test.go:175: Cleaning up "running-upgrade-888639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-888639
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-888639: (2.383071038s)
--- PASS: TestRunningBinaryUpgrade (85.11s)
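
The upgrade path here: create a cluster with an old release binary, then point the current binary at the same profile, which adopts and upgrades the running cluster in place. Sketch (binary paths as in this run; profile name hypothetical):

    /tmp/minikube-v1.26.0.1250684304 start -p demo --memory=2200 --vm-driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p demo --memory=2200 --driver=docker --container-runtime=containerd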

TestKubernetesUpgrade (350.97s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-135557 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-135557 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.899995337s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-135557
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-135557: (1.272898617s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-135557 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-135557 status --format={{.Host}}: exit status 7 (88.770383ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-135557 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1204 23:49:45.366286    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-135557 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m40.783150099s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-135557 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-135557 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-135557 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (130.786818ms)

-- stdout --
	* [kubernetes-upgrade-135557] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-135557
	    minikube start -p kubernetes-upgrade-135557 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1355572 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-135557 --kubernetes-version=v1.31.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-135557 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-135557 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.379235103s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-135557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-135557
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-135557: (2.273069767s)
--- PASS: TestKubernetesUpgrade (350.97s)
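
Note: the sequence above is an upgrade (v1.20.0 to v1.31.2), a downgrade attempt that must fail with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), then a restart at the upgraded version. A rough local equivalent, assuming the minikube binary under test is on PATH (profile name illustrative):

    $ minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    $ minikube stop -p k8s-upgrade
    $ minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.31.2 --driver=docker --container-runtime=containerd
    $ minikube start -p k8s-upgrade --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # should fail: exit 106
    $ kubectl --context k8s-upgrade version --output=json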

TestMissingContainerUpgrade (173.99s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2927916541 start -p missing-upgrade-295312 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2927916541 start -p missing-upgrade-295312 --memory=2200 --driver=docker  --container-runtime=containerd: (1m34.012717485s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-295312
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-295312: (10.277299624s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-295312
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-295312 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-295312 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.734525653s)
helpers_test.go:175: Cleaning up "missing-upgrade-295312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-295312
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-295312: (2.291648635s)
--- PASS: TestMissingContainerUpgrade (173.99s)
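
Note: here the cluster's container is stopped and removed behind minikube's back, and the next start must detect the missing container and recreate it. A sketch, assuming an existing docker-driver profile (name illustrative):

    $ docker stop missing-upgrade && docker rm missing-upgrade
    $ minikube start -p missing-upgrade --driver=docker --container-runtime=containerd   # should recover by recreating the container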

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-628622 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-628622 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (93.691297ms)

-- stdout --
	* [NoKubernetes-628622] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
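
Note: exit status 14 (MK_USAGE) is the expected outcome here, since --no-kubernetes and --kubernetes-version are mutually exclusive. If the version is pinned in the global config rather than passed on the command line, the remedy suggested by the error text applies (profile name illustrative):

    $ minikube config unset kubernetes-version
    $ minikube start -p nok8s --no-kubernetes --driver=docker --container-runtime=containerd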

TestNoKubernetes/serial/StartWithK8s (40.2s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-628622 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-628622 --driver=docker  --container-runtime=containerd: (39.820630019s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-628622 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.20s)

TestNoKubernetes/serial/StartWithStopK8s (7.8s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-628622 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-628622 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.402417861s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-628622 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-628622 status -o json: exit status 2 (417.654771ms)

-- stdout --
	{"Name":"NoKubernetes-628622","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-628622
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-628622: (1.983351715s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.80s)
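
Note: with the kubelet and API server stopped, minikube status exits 2 but still prints the JSON state shown above, so scripts should parse the fields rather than rely on the exit code alone. A sketch, assuming jq is installed (profile name illustrative):

    $ minikube -p nok8s status -o json | jq -r '.Kubelet'   # prints "Stopped"; minikube itself exits 2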

TestNoKubernetes/serial/Start (11.96s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-628622 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-628622 --no-kubernetes --driver=docker  --container-runtime=containerd: (11.960141037s)
--- PASS: TestNoKubernetes/serial/Start (11.96s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-628622 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-628622 "sudo systemctl is-active --quiet service kubelet": exit status 1 (250.384402ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
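
Note: the non-zero exit is the pass condition here: systemctl is-active returns status 3 inside the guest when the unit is inactive, which minikube ssh surfaces as exit status 1. A sketch (profile name illustrative):

    $ minikube ssh -p nok8s "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"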

TestNoKubernetes/serial/ProfileList (1.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-628622
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-628622: (1.203793049s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (6.84s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-628622 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-628622 --driver=docker  --container-runtime=containerd: (6.835882382s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.84s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-628622 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-628622 "sudo systemctl is-active --quiet service kubelet": exit status 1 (384.643485ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestStoppedBinaryUpgrade/Setup (1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

TestStoppedBinaryUpgrade/Upgrade (107.17s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4282912594 start -p stopped-upgrade-532855 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4282912594 start -p stopped-upgrade-532855 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.283784622s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4282912594 -p stopped-upgrade-532855 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4282912594 -p stopped-upgrade-532855 stop: (19.880063527s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-532855 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1204 23:51:30.816969    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-532855 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.002895879s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.17s)
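
Note: this is the stopped-cluster variant of the binary upgrade: the old binary creates and stops the cluster, and the new binary must bring it back up. A sketch, assuming the old release saved as ./minikube-v1.26.0 (profile name illustrative):

    $ ./minikube-v1.26.0 start -p stopped-upgrade --memory=2200 --vm-driver=docker --container-runtime=containerd
    $ ./minikube-v1.26.0 -p stopped-upgrade stop
    $ minikube start -p stopped-upgrade --memory=2200 --driver=docker --container-runtime=containerd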

TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-532855
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

TestPause/serial/Start (71.15s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-146757 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-146757 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m11.14625711s)
--- PASS: TestPause/serial/Start (71.15s)

TestPause/serial/SecondStartNoReconfiguration (6.88s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-146757 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-146757 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.868112594s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.88s)

TestPause/serial/Pause (1.17s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-146757 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-146757 --alsologtostderr -v=5: (1.166900386s)
--- PASS: TestPause/serial/Pause (1.17s)

TestPause/serial/VerifyStatus (0.5s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-146757 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-146757 --output=json --layout=cluster: exit status 2 (497.042454ms)

-- stdout --
	{"Name":"pause-146757","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-146757","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.50s)
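
Note: a paused cluster reports StatusCode 418 ("Paused") in the cluster layout, and minikube status exits 2, so the non-zero exit above is expected. A sketch, assuming jq is installed (profile name illustrative):

    $ minikube status -p paused --output=json --layout=cluster | jq -r '.StatusName'   # prints "Paused"; minikube itself exits 2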

TestPause/serial/Unpause (0.95s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-146757 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

TestPause/serial/PauseAgain (1.09s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-146757 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-146757 --alsologtostderr -v=5: (1.092121558s)
--- PASS: TestPause/serial/PauseAgain (1.09s)

TestPause/serial/DeletePaused (2.94s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-146757 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-146757 --alsologtostderr -v=5: (2.942986764s)
--- PASS: TestPause/serial/DeletePaused (2.94s)

TestPause/serial/VerifyDeletedResources (0.86s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-146757
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-146757: exit status 1 (26.888501ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-146757: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.86s)
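
Note: deletion is verified negatively: once the profile is gone, inspecting its Docker volume must fail with "no such volume". A sketch (profile name illustrative):

    $ minikube delete -p paused
    $ docker volume inspect paused   # expected: exit 1, "no such volume"
    $ docker network ls              # the profile's network should be gone as well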

TestNetworkPlugins/group/false (6.17s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-147448 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-147448 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (234.175331ms)

-- stdout --
	* [false-147448] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1204 23:54:40.513029  199650 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:54:40.513330  199650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:54:40.513341  199650 out.go:358] Setting ErrFile to fd 2...
	I1204 23:54:40.513346  199650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:54:40.513620  199650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
	I1204 23:54:40.514113  199650 out.go:352] Setting JSON to false
	I1204 23:54:40.515085  199650 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5831,"bootTime":1733350650,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1204 23:54:40.515188  199650 start.go:139] virtualization:  
	I1204 23:54:40.520429  199650 out.go:177] * [false-147448] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1204 23:54:40.523262  199650 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:54:40.523338  199650 notify.go:220] Checking for updates...
	I1204 23:54:40.528960  199650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:54:40.531744  199650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
	I1204 23:54:40.534377  199650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
	I1204 23:54:40.537043  199650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1204 23:54:40.539646  199650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:54:40.542848  199650 config.go:182] Loaded profile config "force-systemd-env-932373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1204 23:54:40.542952  199650 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:54:40.574412  199650 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:54:40.574580  199650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:54:40.657868  199650 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:57 SystemTime:2024-12-04 23:54:40.642854633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1204 23:54:40.658000  199650 docker.go:318] overlay module found
	I1204 23:54:40.660848  199650 out.go:177] * Using the docker driver based on user configuration
	I1204 23:54:40.663390  199650 start.go:297] selected driver: docker
	I1204 23:54:40.663410  199650 start.go:901] validating driver "docker" against <nil>
	I1204 23:54:40.663423  199650 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:54:40.666875  199650 out.go:201] 
	W1204 23:54:40.669495  199650 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1204 23:54:40.672198  199650 out.go:201] 

** /stderr **
E1204 23:54:45.366241    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:88: 
----------------------- debugLogs start: false-147448 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-147448

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-147448

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-147448

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-147448

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-147448

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-147448

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-147448

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-147448

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-147448

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-147448

>>> host: /etc/nsswitch.conf:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /etc/hosts:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /etc/resolv.conf:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-147448

>>> host: crictl pods:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: crictl containers:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> k8s: describe netcat deployment:
error: context "false-147448" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-147448" does not exist

>>> k8s: netcat logs:
error: context "false-147448" does not exist

>>> k8s: describe coredns deployment:
error: context "false-147448" does not exist

>>> k8s: describe coredns pods:
error: context "false-147448" does not exist

>>> k8s: coredns logs:
error: context "false-147448" does not exist

>>> k8s: describe api server pod(s):
error: context "false-147448" does not exist

>>> k8s: api server logs:
error: context "false-147448" does not exist

>>> host: /etc/cni:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: ip a s:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: ip r s:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: iptables-save:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: iptables table nat:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> k8s: describe kube-proxy daemon set:
error: context "false-147448" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-147448" does not exist

>>> k8s: kube-proxy logs:
error: context "false-147448" does not exist

>>> host: kubelet daemon status:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: kubelet daemon config:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> k8s: kubelet logs:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-147448

>>> host: docker daemon status:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: docker daemon config:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /etc/docker/daemon.json:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: docker system info:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: cri-docker daemon status:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: cri-docker daemon config:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: cri-dockerd version:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: containerd daemon status:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: containerd daemon config:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /etc/containerd/config.toml:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: containerd config dump:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: crio daemon status:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: crio daemon config:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: /etc/crio:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

>>> host: crio config:
* Profile "false-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-147448"

----------------------- debugLogs end: false-147448 [took: 5.59806998s] --------------------------------
helpers_test.go:175: Cleaning up "false-147448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-147448
--- PASS: TestNetworkPlugins/group/false (6.17s)
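
Note: this group passes when the start is rejected, since the containerd runtime requires a CNI plugin and --cni=false therefore exits 14 (MK_USAGE). A sketch (profile name illustrative):

    $ minikube start -p cni-false --cni=false --driver=docker --container-runtime=containerd   # expected: exit 14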

TestStartStop/group/old-k8s-version/serial/FirstStart (137.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-066167 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1204 23:56:30.817017    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-066167 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m17.404897985s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (137.41s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-066167 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c382fc9a-568c-4c03-b600-bbdd0e180459] Pending
helpers_test.go:344: "busybox" [c382fc9a-568c-4c03-b600-bbdd0e180459] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c382fc9a-568c-4c03-b600-bbdd0e180459] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00396979s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-066167 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.64s)
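
Note: the deploy step creates a busybox pod from testdata and waits up to 8 minutes for it to become Ready before checking the file-descriptor limit inside the container. A rough equivalent (context name illustrative):

    $ kubectl --context old-k8s create -f testdata/busybox.yaml
    $ kubectl --context old-k8s wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    $ kubectl --context old-k8s exec busybox -- /bin/sh -c "ulimit -n"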

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-066167 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-066167 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.541977072s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-066167 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.69s)
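
Note: the metrics-server addon is enabled with its image deliberately redirected to an unreachable registry (fake.domain), so the addon is configured but can never pull; later steps in this group rely on that. A sketch (profile name illustrative):

    $ minikube addons enable metrics-server -p old-k8s \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    $ kubectl --context old-k8s describe deploy/metrics-server -n kube-system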

TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-066167 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-066167 --alsologtostderr -v=3: (12.408367938s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

TestStartStop/group/no-preload/serial/FirstStart (74.6s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-013030 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-013030 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m14.60298835s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.60s)
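
Note: --preload=false skips minikube's preloaded images tarball, so every component image is pulled individually on first start. A sketch (profile name illustrative):

    $ minikube start -p no-preload --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.2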

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-066167 -n old-k8s-version-066167
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-066167 -n old-k8s-version-066167: exit status 7 (85.100331ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-066167 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
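
Note: exit code 7 from minikube status corresponds to the Stopped host state shown above, which the test treats as acceptable; addons can still be toggled while the cluster is down and take effect on the next start. A sketch (profile name illustrative):

    $ minikube status --format={{.Host}} -p old-k8s   # prints "Stopped", exits 7
    $ minikube addons enable dashboard -p old-k8s --images=MetricsScraper=registry.k8s.io/echoserver:1.4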

TestStartStop/group/no-preload/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-013030 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4866fc69-2cde-4d41-afcf-028828c62fd0] Pending
helpers_test.go:344: "busybox" [4866fc69-2cde-4d41-afcf-028828c62fd0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4866fc69-2cde-4d41-afcf-028828c62fd0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005641072s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-013030 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-013030 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-013030 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.189825255s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-013030 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.36s)
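
The addon enable with image/registry overrides, done by hand (sketch; profile name is a placeholder). fake.domain is deliberate: the check only verifies that the override lands in the Deployment spec, not that the image pulls:

  minikube addons enable metrics-server -p demo \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain
  # confirm the override reached the Deployment spec
  kubectl --context demo describe deploy/metrics-server -n kube-system | grep Image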

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.08s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-013030 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-013030 --alsologtostderr -v=3: (12.075371465s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-013030 -n no-preload-013030
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-013030 -n no-preload-013030: exit status 7 (84.692343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-013030 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (302.84s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-013030 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1205 00:01:30.816734    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:04:45.365923    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-013030 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (5m2.411623479s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-013030 -n no-preload-013030
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (302.84s)
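
A second start reuses the existing profile: rerunning `minikube start` with the same flags restarts the stopped container and waits for readiness. Roughly (sketch, placeholder profile):

  minikube start -p demo --memory=2200 --wait=true \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.2
  minikube status --format='{{.Host}}' -p demo    # expect "Running", exit 0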

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lgvv5" [c1d1353d-9647-4a18-a5e5-377b112cfc22] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004183996s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
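
The post-restart check is just a label-selector wait in the kubernetes-dashboard namespace; by hand it is approximately (sketch):

  kubectl --context demo -n kubernetes-dashboard wait pod \
    -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m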

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lgvv5" [c1d1353d-9647-4a18-a5e5-377b112cfc22] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005600088s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-066167 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-066167 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
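
The image audit lists everything in the container runtime and flags images outside the minikube registries. A rough manual equivalent (sketch; the grep pattern is an assumption about which registries count as "minikube" images, mirroring what the test reports):

  minikube -p demo image list --format=json \
    | grep -Ev 'registry\.k8s\.io|gcr\.io/k8s-minikube|k8s\.gcr\.io' || true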

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-066167 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-066167 -n old-k8s-version-066167
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-066167 -n old-k8s-version-066167: exit status 2 (438.732473ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-066167 -n old-k8s-version-066167
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-066167 -n old-k8s-version-066167: exit status 2 (379.884336ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-066167 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-066167 -n old-k8s-version-066167
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-066167 -n old-k8s-version-066167
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.11s)
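
Pause/unpause semantics as exercised above: while paused, `status` reports the apiserver as Paused and the kubelet as Stopped, each with exit code 2. A manual replay (sketch, placeholder profile):

  minikube pause -p demo
  minikube status --format='{{.APIServer}}' -p demo   # "Paused", exit 2
  minikube status --format='{{.Kubelet}}' -p demo     # "Stopped", exit 2
  minikube unpause -p demo
  minikube status --format='{{.APIServer}}' -p demo   # back to "Running", exit 0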

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5zxp4" [500b8458-2de2-42f4-966e-7ee1c8a0cd53] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003784682s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (58.08s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-603788 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-603788 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (58.079344643s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.08s)
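
--embed-certs inlines the client certificate and key into kubeconfig instead of referencing files on disk. A quick way to confirm (sketch; client-certificate-data is the standard kubeconfig field name):

  minikube start -p demo --embed-certs \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.2
  grep -c 'client-certificate-data' ~/.kube/config   # >0 means certs are embedded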

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.21s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5zxp4" [500b8458-2de2-42f4-966e-7ee1c8a0cd53] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004770952s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-013030 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.58s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-013030 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.86s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-013030 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-013030 --alsologtostderr -v=1: (1.107327413s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-013030 -n no-preload-013030
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-013030 -n no-preload-013030: exit status 2 (413.869238ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-013030 -n no-preload-013030
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-013030 -n no-preload-013030: exit status 2 (385.849784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-013030 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-013030 -n no-preload-013030
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-013030 -n no-preload-013030
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-287023 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-287023 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (53.402888024s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.40s)
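
--apiserver-port moves the API server off the default 8443 inside the node; kubectl keeps working because minikube writes the right endpoint into kubeconfig. Sketch (placeholder profile):

  minikube start -p demo --apiserver-port=8444 \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.2
  kubectl --context demo get nodes   # endpoint resolution is handled by kubeconfig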

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.37s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-603788 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1070ff06-d2e5-4e19-886a-80012d0e9b85] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1205 00:06:30.816994    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [1070ff06-d2e5-4e19-886a-80012d0e9b85] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005226797s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-603788 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-603788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-603788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.046479049s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-603788 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.14s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-603788 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-603788 --alsologtostderr -v=3: (12.137385315s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.41s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-287023 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0fe901ca-ecc1-4f36-bda6-6aa86d5511f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0fe901ca-ecc1-4f36-bda6-6aa86d5511f9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004149213s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-287023 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-287023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-287023 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-287023 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-287023 --alsologtostderr -v=3: (12.1763191s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-603788 -n embed-certs-603788
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-603788 -n embed-certs-603788: exit status 7 (121.569343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-603788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (297.34s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-603788 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-603788 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m56.979917784s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-603788 -n embed-certs-603788
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-287023 -n default-k8s-diff-port-287023
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-287023 -n default-k8s-diff-port-287023: exit status 7 (128.50598ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-287023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (275.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-287023 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1205 00:08:28.422862    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:28.429366    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:28.440753    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:28.462125    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:28.503515    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:28.585859    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:28.747354    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:29.069413    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:29.711395    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:30.993059    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:33.554378    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:38.675980    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:48.917278    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:09.399081    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:28.436937    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:45.366525    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:50.360588    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:58.776265    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:58.782668    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:58.794091    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:58.815465    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:58.856820    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:58.939487    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:59.101191    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:09:59.422582    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:10:00.075406    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:10:01.356755    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:10:03.919048    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:10:09.040614    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:10:19.282097    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:10:39.763556    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:11:12.282893    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:11:20.725562    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:11:30.816596    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-287023 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m34.86862642s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-287023 -n default-k8s-diff-port-287023
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (275.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xzlt9" [d7b6a55e-72fc-47cb-84c9-4bdaeaf45e81] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00365908s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xzlt9" [d7b6a55e-72fc-47cb-84c9-4bdaeaf45e81] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004494157s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-287023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kxfdm" [3018edf2-1cac-4b42-9d8b-0a39a49cd4c9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004253181s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-287023 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-287023 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-287023 -n default-k8s-diff-port-287023
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-287023 -n default-k8s-diff-port-287023: exit status 2 (313.66968ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-287023 -n default-k8s-diff-port-287023
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-287023 -n default-k8s-diff-port-287023: exit status 2 (315.816437ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-287023 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-287023 -n default-k8s-diff-port-287023
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-287023 -n default-k8s-diff-port-287023
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kxfdm" [3018edf2-1cac-4b42-9d8b-0a39a49cd4c9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00402177s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-603788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.31s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-379916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-379916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (43.314572799s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.31s)
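
The newest-cni variant combines a bare CNI setup with a kubeadm extra-config and a feature gate, and relaxes --wait because no workload CNI is installed yet. Roughly (sketch, placeholder profile; flags as logged above):

  minikube start -p demo --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --feature-gates ServerSideApply=true \
    --wait=apiserver,system_pods,default_sa \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.2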

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-603788 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.83s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-603788 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-603788 --alsologtostderr -v=1: (1.328145099s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-603788 -n embed-certs-603788
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-603788 -n embed-certs-603788: exit status 2 (500.329136ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-603788 -n embed-certs-603788
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-603788 -n embed-certs-603788: exit status 2 (386.955195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-603788 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-603788 -n embed-certs-603788
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-603788 -n embed-certs-603788
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.83s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (54.89s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (54.893980429s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.89s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-379916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-379916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.198518186s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.33s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-379916 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-379916 --alsologtostderr -v=3: (1.326293152s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-379916 -n newest-cni-379916
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-379916 -n newest-cni-379916: exit status 7 (80.363652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-379916 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.08s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-379916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1205 00:12:42.647578    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-379916 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (14.719040824s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-379916 -n newest-cni-379916
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-379916 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.19s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-379916 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-379916 -n newest-cni-379916
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-379916 -n newest-cni-379916: exit status 2 (324.365386ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-379916 -n newest-cni-379916
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-379916 -n newest-cni-379916: exit status 2 (321.81562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-379916 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-379916 -n newest-cni-379916
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-379916 -n newest-cni-379916
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.19s)
E1205 00:18:01.681678    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/default-k8s-diff-port-287023/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:02.003121    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:02.011079    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:02.022545    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:02.043871    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:02.085395    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:02.167306    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:02.329504    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:02.651211    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:03.292740    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:18:04.574431    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/auto-147448/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-147448 "pgrep -a kubelet"
I1205 00:13:01.632665    7736 config.go:182] Loaded profile config "auto-147448": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-147448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8tqzw" [85dab237-0464-4d30-a2b5-7f5056c71a6a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8tqzw" [85dab237-0464-4d30-a2b5-7f5056c71a6a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004143496s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.41s)
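
The NetCatPod steps all follow the same deploy-then-wait shape: apply the netcat manifest, then block until a pod labelled app=netcat is Running. A sketch of that pattern using plain kubectl, under stated assumptions: the context name and manifest path come from the log above, while the jsonpath polling loop is an illustration, not minikube's actual wait helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const ctx = "auto-147448"

	// Equivalent of: kubectl --context auto-147448 replace --force -f testdata/netcat-deployment.yaml
	if out, err := exec.Command("kubectl", "--context", ctx, "replace", "--force",
		"-f", "testdata/netcat-deployment.yaml").CombinedOutput(); err != nil {
		fmt.Printf("deploy failed: %v\n%s", err, out)
		return
	}

	// Poll the phase of pods matching app=netcat, bounded like the test's 15m wait.
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", "app=netcat", "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("app=netcat is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}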

TestNetworkPlugins/group/kindnet/Start (65.95s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m5.951635304s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.95s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-147448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
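
Taken together, the DNS, Localhost and HairPin probes exercise three distinct paths from inside the pod: service-name resolution, loopback, and the pod reaching itself through its own Service, which only succeeds when the CNI handles hairpin traffic (nc -z makes a connect-only probe, -w 5 bounds the wait). A sketch of the trio as kubectl invocations; the context and service names are from the log, and treating any non-zero exit as probe failure is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const ctx = "auto-147448"
	probes := map[string][]string{
		"dns":       {"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
		"localhost": {"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, args := range probes {
		// kubectl --context auto-147448 exec deployment/netcat -- ...
		cmd := exec.Command("kubectl", append([]string{"--context", ctx}, args...)...)
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s probe failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s probe ok\n", name)
	}
}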

TestNetworkPlugins/group/calico/Start (71.98s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1205 00:13:56.124236    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m11.976448043s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.98s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-csjk9" [d7961d3a-8cb5-4cbd-9612-a5a3e5913379] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004603094s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-147448 "pgrep -a kubelet"
I1205 00:14:14.840996    7736 config.go:182] Loaded profile config "kindnet-147448": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-147448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4vvp5" [51852dbb-701b-4306-8bad-b5c390641152] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4vvp5" [51852dbb-701b-4306-8bad-b5c390641152] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003871194s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-147448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rc4p6" [da5222a9-43b6-4c8b-b06e-c2837a44fb5c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004604664s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.998424974s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.00s)
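
Note the two spellings of --cni exercised by these Start tests: a built-in plugin name (kindnet, calico, flannel, bridge) or a path to a custom CNI manifest, as custom-flannel does with testdata/kube-flannel.yaml. A sketch of both forms; the profile names here are hypothetical, and the binary path is the one used throughout this report:

package main

import (
	"fmt"
	"os/exec"
)

// startWithCNI runs minikube start with the given --cni value, mirroring the
// flag set these tests pass (memory, wait, driver, runtime).
func startWithCNI(profile, cni string) error {
	args := []string{"start", "-p", profile, "--memory=3072", "--wait=true",
		"--wait-timeout=15m", "--cni=" + cni, "--driver=docker", "--container-runtime=containerd"}
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("minikube start: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Built-in plugin by name, then a custom manifest by path (hypothetical profiles).
	if err := startWithCNI("flannel-demo", "flannel"); err != nil {
		fmt.Println(err)
	}
	if err := startWithCNI("custom-flannel-demo", "testdata/kube-flannel.yaml"); err != nil {
		fmt.Println(err)
	}
}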

TestNetworkPlugins/group/calico/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-147448 "pgrep -a kubelet"
I1205 00:14:54.779246    7736 config.go:182] Loaded profile config "calico-147448": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.54s)

TestNetworkPlugins/group/calico/NetCatPod (11.53s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-147448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-44p7b" [17251865-a9db-4402-9558-3543d7cb93de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 00:14:58.775912    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-44p7b" [17251865-a9db-4402-9558-3543d7cb93de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00537339s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.53s)

TestNetworkPlugins/group/calico/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-147448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (41.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (41.942766626s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.94s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-147448 "pgrep -a kubelet"
I1205 00:15:44.599824    7736 config.go:182] Loaded profile config "custom-flannel-147448": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-147448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-llckx" [4402299a-22cb-4d0c-997c-3be233b64d85] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-llckx" [4402299a-22cb-4d0c-997c-3be233b64d85] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003995802s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.38s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-147448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-147448 "pgrep -a kubelet"
I1205 00:16:15.647015    7736 config.go:182] Loaded profile config "enable-default-cni-147448": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-147448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gzqd7" [7c6842d4-699a-4852-b78d-583fe74aa131] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gzqd7" [7c6842d4-699a-4852-b78d-583fe74aa131] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006294599s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.41s)

TestNetworkPlugins/group/flannel/Start (50.4s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (50.398498688s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.40s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-147448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.40s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/bridge/Start (45.16s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1205 00:17:00.237610    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/default-k8s-diff-port-287023/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-147448 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (45.159375082s)
--- PASS: TestNetworkPlugins/group/bridge/Start (45.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tk9rf" [1ac5bfb8-06df-46dd-a8fe-91f958e8c938] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004946407s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
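
Each ControllerPod step waits for the CNI's own agent pod by label, in whatever namespace that plugin uses: app=kindnet and k8s-app=calico-node live in kube-system, while flannel's daemonset runs in kube-flannel. The same wait expressed as a kubectl wait call (an equivalent formulation under stated assumptions, not the test's own code; the timeout mirrors the test's 10m bound):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until a pod carrying the flannel label reports Ready.
	cmd := exec.Command("kubectl", "--context", "flannel-147448",
		"wait", "--for=condition=Ready", "pod",
		"-l", "app=flannel", "-n", "kube-flannel", "--timeout=10m")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("wait failed: %v\n%s", err, out)
		return
	}
	fmt.Println("flannel controller pod is Ready")
}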

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-147448 "pgrep -a kubelet"
I1205 00:17:16.500513    7736 config.go:182] Loaded profile config "flannel-147448": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (10.46s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-147448 replace --force -f testdata/netcat-deployment.yaml
I1205 00:17:16.928038    7736 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fmmj8" [06208ec5-a2d3-4d8e-8e86-d036bfe52bb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 00:17:20.719680    7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/default-k8s-diff-port-287023/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fmmj8" [06208ec5-a2d3-4d8e-8e86-d036bfe52bb8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00390436s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.46s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-147448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-147448 "pgrep -a kubelet"
I1205 00:17:36.487818    7736 config.go:182] Loaded profile config "bridge-147448": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

TestNetworkPlugins/group/bridge/NetCatPod (10.48s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-147448 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ftqwn" [a90b50b9-7af1-4529-8bbe-cd0aae31d9d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ftqwn" [a90b50b9-7af1-4529-8bbe-cd0aae31d9d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005576848s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.48s)

TestNetworkPlugins/group/bridge/DNS (0.44s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-147448 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.44s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-147448 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.24s)

Test skip (29/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-887350 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-887350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-887350
--- SKIP: TestDownloadOnlyKic (0.53s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-311939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-311939
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

TestNetworkPlugins/group/kubenet (4.98s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-147448 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-147448

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-147448

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-147448

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-147448

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-147448

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-147448

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-147448

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-147448

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-147448

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-147448

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /etc/hosts:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /etc/resolv.conf:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-147448

>>> host: crictl pods:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: crictl containers:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> k8s: describe netcat deployment:
error: context "kubenet-147448" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-147448" does not exist

>>> k8s: netcat logs:
error: context "kubenet-147448" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-147448" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-147448" does not exist

>>> k8s: coredns logs:
error: context "kubenet-147448" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-147448" does not exist

>>> k8s: api server logs:
error: context "kubenet-147448" does not exist

>>> host: /etc/cni:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: ip a s:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: ip r s:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: iptables-save:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: iptables table nat:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-147448" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-147448" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-147448" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: kubelet daemon config:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> k8s: kubelet logs:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20045-2283/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 23:54:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-773661
contexts:
- context:
    cluster: force-systemd-flag-773661
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 23:54:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-flag-773661
  name: force-systemd-flag-773661
current-context: force-systemd-flag-773661
kind: Config
preferences: {}
users:
- name: force-systemd-flag-773661
  user:
    client-certificate: /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/force-systemd-flag-773661/client.crt
    client-key: /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/force-systemd-flag-773661/client.key
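Note that this kubeconfig still points at the force-systemd-flag-773661 profile rather than kubenet-147448, which is why every kubectl-based probe in this dump reports a missing context. The active context can be confirmed with the standard kubectl subcommands (shown for illustration):

  kubectl config current-context
  kubectl config get-contexts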
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-147448

>>> host: docker daemon status:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: docker daemon config:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: docker system info:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: cri-docker daemon status:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: cri-docker daemon config:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: cri-dockerd version:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: containerd daemon status:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: containerd daemon config:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: containerd config dump:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: crio daemon status:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: crio daemon config:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: /etc/crio:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"

>>> host: crio config:
* Profile "kubenet-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-147448"
----------------------- debugLogs end: kubenet-147448 [took: 4.744527342s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-147448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-147448
--- SKIP: TestNetworkPlugins/group/kubenet (4.98s)
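If a leftover profile like this ever needs the same cleanup by hand, the equivalent manual steps are the two commands already referenced in this log (illustrative):

  out/minikube-linux-arm64 profile list
  out/minikube-linux-arm64 delete -p kubenet-147448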
TestNetworkPlugins/group/cilium (5.24s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-147448 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-147448

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-147448

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-147448

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-147448

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-147448

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-147448

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-147448

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-147448

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-147448

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-147448
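Each "netcat:" probe above exercises in-cluster DNS and connectivity through a test pod. With a live context, roughly equivalent manual checks would look like the following sketch (illustrative only; the deploy/netcat target mirrors the harness's naming and did not exist in this run):

  kubectl --context cilium-147448 exec deploy/netcat -- nslookup kubernetes.default
  kubectl --context cilium-147448 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
  kubectl --context cilium-147448 exec deploy/netcat -- nc -vz -w 2 10.96.0.10 53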
>>> host: /etc/nsswitch.conf:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /etc/hosts:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /etc/resolv.conf:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-147448

>>> host: crictl pods:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: crictl containers:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> k8s: describe netcat deployment:
error: context "cilium-147448" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-147448" does not exist

>>> k8s: netcat logs:
error: context "cilium-147448" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-147448" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-147448" does not exist

>>> k8s: coredns logs:
error: context "cilium-147448" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-147448" does not exist

>>> k8s: api server logs:
error: context "cilium-147448" does not exist

>>> host: /etc/cni:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: ip a s:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: ip r s:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: iptables-save:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: iptables table nat:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-147448

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-147448

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-147448" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-147448" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-147448

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-147448

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-147448" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-147448" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-147448" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-147448" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-147448" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: kubelet daemon config:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> k8s: kubelet logs:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
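An empty kubeconfig like this (clusters: null, no current-context) is consistent with the "context was not found" errors throughout this dump: there are no entries for kubectl to resolve at all. The same view can be reproduced with the standard subcommand (illustrative):

  kubectl config view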
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-147448

>>> host: docker daemon status:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: docker daemon config:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: docker system info:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: cri-docker daemon status:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: cri-docker daemon config:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: cri-dockerd version:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: containerd daemon status:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: containerd daemon config:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: containerd config dump:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: crio daemon status:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: crio daemon config:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: /etc/crio:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"

>>> host: crio config:
* Profile "cilium-147448" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-147448"
----------------------- debugLogs end: cilium-147448 [took: 5.049789127s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-147448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-147448
--- SKIP: TestNetworkPlugins/group/cilium (5.24s)
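Either skipped group can be re-run locally through the usual Go test filter once the minikube binary is built; an assumed invocation shape for this layout (pattern and timeout are illustrative, not taken from this log):

  go test ./test/integration -run 'TestNetworkPlugins/group/cilium' -timeout 60m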