Test Report: Docker_Linux_containerd_arm64 19868

                    
7e440490692625b78ba9b7da2770c31edaec7633:2024-10-26:36808

Test failures (1/330)

Order  Failed test                                              Duration (s)
304    TestStartStop/group/old-k8s-version/serial/SecondStart   375.85
TestStartStop/group/old-k8s-version/serial/SecondStart (375.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-368787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1026 01:32:09.673362 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:32:20.130066 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-368787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m11.437413658s)

-- stdout --
	* [old-k8s-version-368787] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-368787" primary control-plane node in "old-k8s-version-368787" cluster
	* Pulling base image v0.0.45-1729876044-19868 ...
	* Restarting existing docker container for "old-k8s-version-368787" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-368787 addons enable metrics-server
	
	* Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I1026 01:32:03.567361 2073170 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:32:03.567648 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:32:03.567676 2073170 out.go:358] Setting ErrFile to fd 2...
	I1026 01:32:03.567698 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:32:03.568030 2073170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 01:32:03.568501 2073170 out.go:352] Setting JSON to false
	I1026 01:32:03.569593 2073170 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":33274,"bootTime":1729873050,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 01:32:03.569704 2073170 start.go:139] virtualization:  
	I1026 01:32:03.576988 2073170 out.go:177] * [old-k8s-version-368787] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1026 01:32:03.579892 2073170 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:32:03.579970 2073170 notify.go:220] Checking for updates...
	I1026 01:32:03.582336 2073170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:32:03.584542 2073170 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 01:32:03.586580 2073170 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	I1026 01:32:03.588476 2073170 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 01:32:03.590851 2073170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:32:03.593459 2073170 config.go:182] Loaded profile config "old-k8s-version-368787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1026 01:32:03.596009 2073170 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1026 01:32:03.598044 2073170 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:32:03.649391 2073170 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1026 01:32:03.649585 2073170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:32:03.752644 2073170 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 01:32:03.742525094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 01:32:03.752753 2073170 docker.go:318] overlay module found
	I1026 01:32:03.755258 2073170 out.go:177] * Using the docker driver based on existing profile
	I1026 01:32:03.757210 2073170 start.go:297] selected driver: docker
	I1026 01:32:03.757230 2073170 start.go:901] validating driver "docker" against &{Name:old-k8s-version-368787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:32:03.757346 2073170 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:32:03.758067 2073170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:32:03.874283 2073170 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 01:32:03.864453068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 01:32:03.874673 2073170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:32:03.874710 2073170 cni.go:84] Creating CNI manager for ""
	I1026 01:32:03.874755 2073170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1026 01:32:03.874793 2073170 start.go:340] cluster config:
	{Name:old-k8s-version-368787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:32:03.877155 2073170 out.go:177] * Starting "old-k8s-version-368787" primary control-plane node in "old-k8s-version-368787" cluster
	I1026 01:32:03.879021 2073170 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1026 01:32:03.880756 2073170 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
	I1026 01:32:03.882953 2073170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1026 01:32:03.882986 2073170 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1026 01:32:03.883005 2073170 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1026 01:32:03.883032 2073170 cache.go:56] Caching tarball of preloaded images
	I1026 01:32:03.883121 2073170 preload.go:172] Found /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1026 01:32:03.883131 2073170 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1026 01:32:03.883244 2073170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/config.json ...
	I1026 01:32:03.922366 2073170 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
	I1026 01:32:03.922393 2073170 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
	I1026 01:32:03.922407 2073170 cache.go:194] Successfully downloaded all kic artifacts
	I1026 01:32:03.922432 2073170 start.go:360] acquireMachinesLock for old-k8s-version-368787: {Name:mk44d3baf3e6deb53ffd853750905e1ae52b8a7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:32:03.922498 2073170 start.go:364] duration metric: took 33.904µs to acquireMachinesLock for "old-k8s-version-368787"
	I1026 01:32:03.922525 2073170 start.go:96] Skipping create...Using existing machine configuration
	I1026 01:32:03.922533 2073170 fix.go:54] fixHost starting: 
	I1026 01:32:03.922806 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
	I1026 01:32:03.957596 2073170 fix.go:112] recreateIfNeeded on old-k8s-version-368787: state=Stopped err=<nil>
	W1026 01:32:03.957634 2073170 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 01:32:03.960165 2073170 out.go:177] * Restarting existing docker container for "old-k8s-version-368787" ...
	I1026 01:32:03.962127 2073170 cli_runner.go:164] Run: docker start old-k8s-version-368787
	I1026 01:32:04.369526 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
	I1026 01:32:04.392083 2073170 kic.go:430] container "old-k8s-version-368787" state is running.
	I1026 01:32:04.392506 2073170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-368787
	I1026 01:32:04.421945 2073170 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/config.json ...
	I1026 01:32:04.422186 2073170 machine.go:93] provisionDockerMachine start ...
	I1026 01:32:04.422247 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:04.457029 2073170 main.go:141] libmachine: Using SSH client type: native
	I1026 01:32:04.457292 2073170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 35304 <nil> <nil>}
	I1026 01:32:04.457302 2073170 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 01:32:04.458149 2073170 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42720->127.0.0.1:35304: read: connection reset by peer
	I1026 01:32:07.590810 2073170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-368787
	
	I1026 01:32:07.590833 2073170 ubuntu.go:169] provisioning hostname "old-k8s-version-368787"
	I1026 01:32:07.590938 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:07.610393 2073170 main.go:141] libmachine: Using SSH client type: native
	I1026 01:32:07.610662 2073170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 35304 <nil> <nil>}
	I1026 01:32:07.610681 2073170 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-368787 && echo "old-k8s-version-368787" | sudo tee /etc/hostname
	I1026 01:32:07.757056 2073170 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-368787
	
	I1026 01:32:07.757180 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:07.781219 2073170 main.go:141] libmachine: Using SSH client type: native
	I1026 01:32:07.781479 2073170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 35304 <nil> <nil>}
	I1026 01:32:07.781507 2073170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-368787' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-368787/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-368787' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:32:07.919293 2073170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:32:07.919394 2073170 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19868-1857747/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-1857747/.minikube}
	I1026 01:32:07.919429 2073170 ubuntu.go:177] setting up certificates
	I1026 01:32:07.919463 2073170 provision.go:84] configureAuth start
	I1026 01:32:07.919563 2073170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-368787
	I1026 01:32:07.942146 2073170 provision.go:143] copyHostCerts
	I1026 01:32:07.942219 2073170 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem, removing ...
	I1026 01:32:07.942234 2073170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem
	I1026 01:32:07.942309 2073170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem (1675 bytes)
	I1026 01:32:07.942404 2073170 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem, removing ...
	I1026 01:32:07.942408 2073170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem
	I1026 01:32:07.942433 2073170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem (1078 bytes)
	I1026 01:32:07.942529 2073170 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem, removing ...
	I1026 01:32:07.942534 2073170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem
	I1026 01:32:07.942556 2073170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem (1123 bytes)
	I1026 01:32:07.942605 2073170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-368787 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-368787]
	I1026 01:32:08.395094 2073170 provision.go:177] copyRemoteCerts
	I1026 01:32:08.395169 2073170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:32:08.395216 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:08.411242 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
	I1026 01:32:08.504767 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 01:32:08.542760 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 01:32:08.585116 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:32:08.614235 2073170 provision.go:87] duration metric: took 694.746008ms to configureAuth
	I1026 01:32:08.614267 2073170 ubuntu.go:193] setting minikube options for container-runtime
	I1026 01:32:08.614469 2073170 config.go:182] Loaded profile config "old-k8s-version-368787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1026 01:32:08.614483 2073170 machine.go:96] duration metric: took 4.192289872s to provisionDockerMachine
	I1026 01:32:08.614491 2073170 start.go:293] postStartSetup for "old-k8s-version-368787" (driver="docker")
	I1026 01:32:08.614502 2073170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:32:08.614559 2073170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:32:08.614605 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:08.633350 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
	I1026 01:32:08.726371 2073170 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:32:08.729573 2073170 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 01:32:08.729612 2073170 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 01:32:08.729628 2073170 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 01:32:08.729636 2073170 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1026 01:32:08.729647 2073170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-1857747/.minikube/addons for local assets ...
	I1026 01:32:08.729710 2073170 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-1857747/.minikube/files for local assets ...
	I1026 01:32:08.729794 2073170 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem -> 18643732.pem in /etc/ssl/certs
	I1026 01:32:08.729902 2073170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:32:08.738426 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem --> /etc/ssl/certs/18643732.pem (1708 bytes)
	I1026 01:32:08.766686 2073170 start.go:296] duration metric: took 152.178881ms for postStartSetup
	I1026 01:32:08.766778 2073170 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:32:08.766837 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:08.788043 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
	I1026 01:32:08.876138 2073170 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 01:32:08.880709 2073170 fix.go:56] duration metric: took 4.958168785s for fixHost
	I1026 01:32:08.880738 2073170 start.go:83] releasing machines lock for "old-k8s-version-368787", held for 4.958226205s
	I1026 01:32:08.880811 2073170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-368787
	I1026 01:32:08.897764 2073170 ssh_runner.go:195] Run: cat /version.json
	I1026 01:32:08.897842 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:08.898108 2073170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:32:08.898180 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:08.920650 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
	I1026 01:32:08.922221 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
	I1026 01:32:09.013457 2073170 ssh_runner.go:195] Run: systemctl --version
	I1026 01:32:09.173576 2073170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 01:32:09.177993 2073170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1026 01:32:09.196148 2073170 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1026 01:32:09.196230 2073170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:32:09.205688 2073170 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 01:32:09.205721 2073170 start.go:495] detecting cgroup driver to use...
	I1026 01:32:09.205753 2073170 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 01:32:09.205800 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1026 01:32:09.228824 2073170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1026 01:32:09.244979 2073170 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:32:09.245052 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:32:09.269125 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:32:09.287831 2073170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:32:09.419060 2073170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:32:09.540597 2073170 docker.go:233] disabling docker service ...
	I1026 01:32:09.540722 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:32:09.558694 2073170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:32:09.575662 2073170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:32:09.691643 2073170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:32:09.813763 2073170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:32:09.828060 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:32:09.847871 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1026 01:32:09.860321 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1026 01:32:09.872079 2073170 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1026 01:32:09.872193 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1026 01:32:09.883427 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1026 01:32:09.895409 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1026 01:32:09.908189 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1026 01:32:09.919078 2073170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:32:09.930108 2073170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1026 01:32:09.942640 2073170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:32:09.954367 2073170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:32:09.966519 2073170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:32:10.105866 2073170 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1026 01:32:10.358469 2073170 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1026 01:32:10.358541 2073170 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1026 01:32:10.363485 2073170 start.go:563] Will wait 60s for crictl version
	I1026 01:32:10.363639 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:32:10.367187 2073170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:32:10.426897 2073170 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1026 01:32:10.427046 2073170 ssh_runner.go:195] Run: containerd --version
	I1026 01:32:10.458317 2073170 ssh_runner.go:195] Run: containerd --version
	I1026 01:32:10.486310 2073170 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1026 01:32:10.488290 2073170 cli_runner.go:164] Run: docker network inspect old-k8s-version-368787 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 01:32:10.508166 2073170 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 01:32:10.512429 2073170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:32:10.523128 2073170 kubeadm.go:883] updating cluster {Name:old-k8s-version-368787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:32:10.523257 2073170 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1026 01:32:10.523310 2073170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:32:10.588809 2073170 containerd.go:627] all images are preloaded for containerd runtime.
	I1026 01:32:10.588838 2073170 containerd.go:534] Images already preloaded, skipping extraction
	I1026 01:32:10.588902 2073170 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:32:10.667303 2073170 containerd.go:627] all images are preloaded for containerd runtime.
	I1026 01:32:10.667357 2073170 cache_images.go:84] Images are preloaded, skipping loading
	I1026 01:32:10.667366 2073170 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1026 01:32:10.667509 2073170 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-368787 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:32:10.667611 2073170 ssh_runner.go:195] Run: sudo crictl info
	I1026 01:32:10.763889 2073170 cni.go:84] Creating CNI manager for ""
	I1026 01:32:10.763915 2073170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1026 01:32:10.763926 2073170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:32:10.763951 2073170 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-368787 NodeName:old-k8s-version-368787 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1026 01:32:10.764084 2073170 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-368787"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:32:10.764154 2073170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1026 01:32:10.776843 2073170 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:32:10.776915 2073170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 01:32:10.787122 2073170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1026 01:32:10.810640 2073170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:32:10.833512 2073170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1026 01:32:10.862881 2073170 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 01:32:10.870107 2073170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:32:10.903812 2073170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:32:11.059110 2073170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:32:11.090969 2073170 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787 for IP: 192.168.76.2
	I1026 01:32:11.090992 2073170 certs.go:194] generating shared ca certs ...
	I1026 01:32:11.091009 2073170 certs.go:226] acquiring lock for ca certs: {Name:mkcea56562cecb76fcc8b6004959524ff574e9b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:32:11.091167 2073170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.key
	I1026 01:32:11.091216 2073170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.key
	I1026 01:32:11.091228 2073170 certs.go:256] generating profile certs ...
	I1026 01:32:11.091363 2073170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.key
	I1026 01:32:11.091440 2073170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/apiserver.key.8a4d58df
	I1026 01:32:11.091492 2073170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/proxy-client.key
	I1026 01:32:11.091607 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373.pem (1338 bytes)
	W1026 01:32:11.091644 2073170 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373_empty.pem, impossibly tiny 0 bytes
	I1026 01:32:11.091655 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:32:11.091683 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem (1078 bytes)
	I1026 01:32:11.091715 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:32:11.091752 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem (1675 bytes)
	I1026 01:32:11.091805 2073170 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem (1708 bytes)
	I1026 01:32:11.092524 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:32:11.159947 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:32:11.233325 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:32:11.273907 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1026 01:32:11.304225 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 01:32:11.334396 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 01:32:11.364406 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:32:11.390986 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 01:32:11.417392 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:32:11.441659 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373.pem --> /usr/share/ca-certificates/1864373.pem (1338 bytes)
	I1026 01:32:11.467041 2073170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem --> /usr/share/ca-certificates/18643732.pem (1708 bytes)
	I1026 01:32:11.492813 2073170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:32:11.520467 2073170 ssh_runner.go:195] Run: openssl version
	I1026 01:32:11.526888 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:32:11.537955 2073170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:32:11.542243 2073170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:32:11.542386 2073170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:32:11.551494 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:32:11.562215 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1864373.pem && ln -fs /usr/share/ca-certificates/1864373.pem /etc/ssl/certs/1864373.pem"
	I1026 01:32:11.572923 2073170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1864373.pem
	I1026 01:32:11.577506 2073170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:51 /usr/share/ca-certificates/1864373.pem
	I1026 01:32:11.577626 2073170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1864373.pem
	I1026 01:32:11.585281 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1864373.pem /etc/ssl/certs/51391683.0"
	I1026 01:32:11.597021 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18643732.pem && ln -fs /usr/share/ca-certificates/18643732.pem /etc/ssl/certs/18643732.pem"
	I1026 01:32:11.611697 2073170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18643732.pem
	I1026 01:32:11.615085 2073170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:51 /usr/share/ca-certificates/18643732.pem
	I1026 01:32:11.615147 2073170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18643732.pem
	I1026 01:32:11.622225 2073170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18643732.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:32:11.631601 2073170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:32:11.635176 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 01:32:11.642386 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 01:32:11.651235 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 01:32:11.658900 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 01:32:11.666055 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 01:32:11.673012 2073170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 01:32:11.680109 2073170 kubeadm.go:392] StartCluster: {Name:old-k8s-version-368787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-368787 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:32:11.680221 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1026 01:32:11.680332 2073170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:32:11.724978 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
	I1026 01:32:11.725044 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
	I1026 01:32:11.725063 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
	I1026 01:32:11.725074 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
	I1026 01:32:11.725078 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
	I1026 01:32:11.725086 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
	I1026 01:32:11.725089 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
	I1026 01:32:11.725092 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
	I1026 01:32:11.725095 2073170 cri.go:89] found id: ""
	I1026 01:32:11.725149 2073170 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1026 01:32:11.738691 2073170 cri.go:116] JSON = null
	W1026 01:32:11.738747 2073170 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I1026 01:32:11.738839 2073170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:32:11.749650 2073170 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1026 01:32:11.749675 2073170 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1026 01:32:11.749737 2073170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 01:32:11.760662 2073170 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 01:32:11.761096 2073170 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-368787" does not appear in /home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 01:32:11.761210 2073170 kubeconfig.go:62] /home/jenkins/minikube-integration/19868-1857747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-368787" cluster setting kubeconfig missing "old-k8s-version-368787" context setting]
	I1026 01:32:11.761527 2073170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/kubeconfig: {Name:mk1a434cd0cc84bfd2a4a232bfd16b0239e78299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:32:11.762915 2073170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 01:32:11.771752 2073170 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 01:32:11.771785 2073170 kubeadm.go:597] duration metric: took 22.102755ms to restartPrimaryControlPlane
	I1026 01:32:11.771795 2073170 kubeadm.go:394] duration metric: took 91.695709ms to StartCluster
	I1026 01:32:11.771810 2073170 settings.go:142] acquiring lock: {Name:mk5238870f54ce90633b3ed0ddcc81fb678d064e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:32:11.771874 2073170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 01:32:11.772485 2073170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/kubeconfig: {Name:mk1a434cd0cc84bfd2a4a232bfd16b0239e78299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:32:11.772681 2073170 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1026 01:32:11.773047 2073170 config.go:182] Loaded profile config "old-k8s-version-368787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1026 01:32:11.773067 2073170 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 01:32:11.773192 2073170 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-368787"
	I1026 01:32:11.773206 2073170 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-368787"
	I1026 01:32:11.773215 2073170 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-368787"
	I1026 01:32:11.773221 2073170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-368787"
	I1026 01:32:11.773224 2073170 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-368787"
	W1026 01:32:11.773231 2073170 addons.go:243] addon metrics-server should already be in state true
	I1026 01:32:11.773258 2073170 host.go:66] Checking if "old-k8s-version-368787" exists ...
	I1026 01:32:11.773527 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
	I1026 01:32:11.773657 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
	I1026 01:32:11.773209 2073170 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-368787"
	W1026 01:32:11.773930 2073170 addons.go:243] addon storage-provisioner should already be in state true
	I1026 01:32:11.773956 2073170 host.go:66] Checking if "old-k8s-version-368787" exists ...
	I1026 01:32:11.774371 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
	I1026 01:32:11.778317 2073170 out.go:177] * Verifying Kubernetes components...
	I1026 01:32:11.778706 2073170 addons.go:69] Setting dashboard=true in profile "old-k8s-version-368787"
	I1026 01:32:11.778729 2073170 addons.go:234] Setting addon dashboard=true in "old-k8s-version-368787"
	W1026 01:32:11.778737 2073170 addons.go:243] addon dashboard should already be in state true
	I1026 01:32:11.778780 2073170 host.go:66] Checking if "old-k8s-version-368787" exists ...
	I1026 01:32:11.779287 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
	I1026 01:32:11.780556 2073170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:32:11.820678 2073170 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:32:11.821780 2073170 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-368787"
	W1026 01:32:11.821799 2073170 addons.go:243] addon default-storageclass should already be in state true
	I1026 01:32:11.821825 2073170 host.go:66] Checking if "old-k8s-version-368787" exists ...
	I1026 01:32:11.826689 2073170 cli_runner.go:164] Run: docker container inspect old-k8s-version-368787 --format={{.State.Status}}
	I1026 01:32:11.830481 2073170 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:32:11.830503 2073170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 01:32:11.830567 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:11.844527 2073170 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1026 01:32:11.844670 2073170 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1026 01:32:11.851632 2073170 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 01:32:11.851658 2073170 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 01:32:11.851723 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:11.854726 2073170 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 01:32:11.858441 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 01:32:11.858468 2073170 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 01:32:11.858542 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:11.879262 2073170 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 01:32:11.879283 2073170 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 01:32:11.879643 2073170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-368787
	I1026 01:32:11.895046 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
	I1026 01:32:11.906810 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
	I1026 01:32:11.910912 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
	I1026 01:32:11.936785 2073170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35304 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/old-k8s-version-368787/id_rsa Username:docker}
	I1026 01:32:11.963820 2073170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:32:12.015026 2073170 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-368787" to be "Ready" ...
	I1026 01:32:12.075741 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:32:12.079911 2073170 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 01:32:12.079931 2073170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 01:32:12.137884 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 01:32:12.137963 2073170 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 01:32:12.140714 2073170 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 01:32:12.140783 2073170 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 01:32:12.161531 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:32:12.215078 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 01:32:12.215221 2073170 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 01:32:12.226102 2073170 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 01:32:12.226199 2073170 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 01:32:12.274048 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 01:32:12.309163 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 01:32:12.309273 2073170 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 01:32:12.357749 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 01:32:12.357822 2073170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 01:32:12.403263 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 01:32:12.403366 2073170 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1026 01:32:12.408667 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.408802 2073170 retry.go:31] will retry after 310.718992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1026 01:32:12.443246 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.443377 2073170 retry.go:31] will retry after 242.748817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.450179 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 01:32:12.450276 2073170 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1026 01:32:12.452912 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.453021 2073170 retry.go:31] will retry after 228.853978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.473007 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 01:32:12.473035 2073170 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 01:32:12.492138 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 01:32:12.492163 2073170 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 01:32:12.517237 2073170 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 01:32:12.517274 2073170 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 01:32:12.537740 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 01:32:12.633682 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.633714 2073170 retry.go:31] will retry after 295.010345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.682979 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 01:32:12.686370 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:32:12.719944 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 01:32:12.802406 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.802456 2073170 retry.go:31] will retry after 349.317562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1026 01:32:12.845179 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.845219 2073170 retry.go:31] will retry after 362.541488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1026 01:32:12.875425 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.875460 2073170 retry.go:31] will retry after 225.41973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:12.929651 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 01:32:13.017588 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:13.017679 2073170 retry.go:31] will retry after 326.956571ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:13.101997 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:32:13.152472 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 01:32:13.208632 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 01:32:13.258868 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:13.258959 2073170 retry.go:31] will retry after 457.097198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1026 01:32:13.339025 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:13.339111 2073170 retry.go:31] will retry after 838.797017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:13.345212 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 01:32:13.351047 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:13.351137 2073170 retry.go:31] will retry after 752.009894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1026 01:32:13.439683 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:13.439719 2073170 retry.go:31] will retry after 838.127127ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:13.716979 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 01:32:13.818819 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:13.818853 2073170 retry.go:31] will retry after 745.949942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:14.016572 2073170 node_ready.go:53] error getting node "old-k8s-version-368787": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-368787": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 01:32:14.103895 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:32:14.178422 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1026 01:32:14.192606 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:14.192642 2073170 retry.go:31] will retry after 1.051748191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1026 01:32:14.270245 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:14.270323 2073170 retry.go:31] will retry after 428.664476ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:14.278496 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 01:32:14.397404 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:14.397451 2073170 retry.go:31] will retry after 968.409914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:14.565363 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:32:14.699933 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1026 01:32:14.787541 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:14.787590 2073170 retry.go:31] will retry after 1.554636804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1026 01:32:14.936864 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:14.936901 2073170 retry.go:31] will retry after 728.862459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:15.245130 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:32:15.366534 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 01:32:15.402051 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:15.402100 2073170 retry.go:31] will retry after 833.114051ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1026 01:32:15.542313 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:15.542350 2073170 retry.go:31] will retry after 857.512374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:15.666713 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1026 01:32:15.804572 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:15.804611 2073170 retry.go:31] will retry after 2.707466245s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:16.235760 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 01:32:16.322988 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:16.323024 2073170 retry.go:31] will retry after 2.705849654s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:16.343250 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:32:16.400873 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 01:32:16.437288 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:16.437325 2073170 retry.go:31] will retry after 2.211013377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1026 01:32:16.499076 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:16.499114 2073170 retry.go:31] will retry after 1.172239395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:16.516601 2073170 node_ready.go:53] error getting node "old-k8s-version-368787": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-368787": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 01:32:17.672290 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 01:32:17.755271 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:17.755425 2073170 retry.go:31] will retry after 1.852126673s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:18.513042 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1026 01:32:18.586978 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:18.587017 2073170 retry.go:31] will retry after 3.925391068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:18.649384 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 01:32:18.734166 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:18.734202 2073170 retry.go:31] will retry after 1.759836158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:19.015874 2073170 node_ready.go:53] error getting node "old-k8s-version-368787": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-368787": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 01:32:19.029256 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1026 01:32:19.109954 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:19.109992 2073170 retry.go:31] will retry after 3.098320623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:19.608129 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1026 01:32:19.726372 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:19.726404 2073170 retry.go:31] will retry after 3.576047191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:20.494262 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1026 01:32:20.635207 2073170 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:20.635239 2073170 retry.go:31] will retry after 5.571537164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1026 01:32:22.209033 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1026 01:32:22.513410 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 01:32:23.302722 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 01:32:26.207194 2073170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 01:32:28.130809 2073170 node_ready.go:49] node "old-k8s-version-368787" has status "Ready":"True"
	I1026 01:32:28.130832 2073170 node_ready.go:38] duration metric: took 16.115712125s for node "old-k8s-version-368787" to be "Ready" ...
	I1026 01:32:28.130843 2073170 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:32:28.304565 2073170 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-q7ksx" in "kube-system" namespace to be "Ready" ...
	I1026 01:32:28.406789 2073170 pod_ready.go:93] pod "coredns-74ff55c5b-q7ksx" in "kube-system" namespace has status "Ready":"True"
	I1026 01:32:28.406818 2073170 pod_ready.go:82] duration metric: took 102.164226ms for pod "coredns-74ff55c5b-q7ksx" in "kube-system" namespace to be "Ready" ...
	I1026 01:32:28.406832 2073170 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
	I1026 01:32:28.443881 2073170 pod_ready.go:93] pod "etcd-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"True"
	I1026 01:32:28.443911 2073170 pod_ready.go:82] duration metric: took 37.070533ms for pod "etcd-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
	I1026 01:32:28.443927 2073170 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
	I1026 01:32:29.218578 2073170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.009512798s)
	I1026 01:32:29.218734 2073170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.705294931s)
	I1026 01:32:29.218764 2073170 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-368787"
	I1026 01:32:29.543645 2073170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.336413373s)
	I1026 01:32:29.543753 2073170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.240997796s)
	I1026 01:32:29.545882 2073170 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-368787 addons enable metrics-server
	
	I1026 01:32:29.547642 2073170 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	I1026 01:32:29.549529 2073170 addons.go:510] duration metric: took 17.776471913s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
	I1026 01:32:30.450495 2073170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:32.952429 2073170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:35.450253 2073170 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:36.950677 2073170 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"True"
	I1026 01:32:36.950706 2073170 pod_ready.go:82] duration metric: took 8.50673388s for pod "kube-apiserver-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
	I1026 01:32:36.950719 2073170 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
	I1026 01:32:38.957321 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:41.459646 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:43.957579 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:45.962028 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:48.457308 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:50.458159 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:52.459046 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:54.958472 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:57.457937 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:32:59.458621 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:01.957173 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:03.958196 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:06.458219 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:08.462240 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:10.957381 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:13.459235 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:15.957470 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:18.457384 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:20.457905 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:22.958581 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:25.457350 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:27.458223 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:29.957557 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:32.456980 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:34.457323 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:36.458114 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:38.468892 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:40.956467 2073170 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:41.956496 2073170 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"True"
	I1026 01:33:41.956523 2073170 pod_ready.go:82] duration metric: took 1m5.005795554s for pod "kube-controller-manager-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
	I1026 01:33:41.956534 2073170 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9q264" in "kube-system" namespace to be "Ready" ...
	I1026 01:33:41.961562 2073170 pod_ready.go:93] pod "kube-proxy-9q264" in "kube-system" namespace has status "Ready":"True"
	I1026 01:33:41.961591 2073170 pod_ready.go:82] duration metric: took 5.049617ms for pod "kube-proxy-9q264" in "kube-system" namespace to be "Ready" ...
	I1026 01:33:41.961602 2073170 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
	I1026 01:33:43.967942 2073170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:45.968308 2073170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:47.977225 2073170 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:48.967594 2073170 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace has status "Ready":"True"
	I1026 01:33:48.967619 2073170 pod_ready.go:82] duration metric: took 7.00600995s for pod "kube-scheduler-old-k8s-version-368787" in "kube-system" namespace to be "Ready" ...
	I1026 01:33:48.967630 2073170 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace to be "Ready" ...
	I1026 01:33:50.978010 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:52.978655 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:55.475643 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:57.476707 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:33:59.975150 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:02.475065 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:04.488646 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:06.978527 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:09.476165 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:11.975712 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:13.990814 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:16.479158 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:18.977978 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:20.978225 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:23.477539 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:25.974399 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:27.976880 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:29.980173 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:32.475115 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:34.478508 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:36.479994 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:38.983668 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:41.476950 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:43.485811 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:45.975127 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:47.975496 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:49.977110 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:52.476096 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:54.478172 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:56.977397 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:34:59.482319 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:01.974023 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:03.975465 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:05.977964 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:08.485332 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:10.974374 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:12.975363 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:14.977421 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:16.980582 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:19.475486 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:21.478176 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:23.978834 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:25.989106 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:28.486736 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:30.977452 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:32.979017 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:35.478130 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:37.975067 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:39.975943 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:41.979275 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:44.477363 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:46.479551 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:48.974097 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:50.978780 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:53.474551 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:55.474782 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:57.478975 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:35:59.975807 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:02.476744 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:04.976508 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:06.977878 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:09.477246 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:11.974267 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:13.978184 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:15.978303 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:17.978385 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:19.992616 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:22.476294 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:24.477658 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:26.979767 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:29.474149 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:31.474259 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:33.478228 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:35.977110 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:37.977162 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:40.477632 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:42.979661 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:45.475566 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:47.480122 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:49.975101 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:51.981869 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:54.479101 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:56.979657 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:36:59.476151 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:01.973872 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:03.974794 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:05.980099 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:08.476353 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:10.974308 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:13.473906 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:15.474149 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:17.474272 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:19.474878 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:21.481450 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:23.973530 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:26.474692 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:28.974421 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:31.474651 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:33.974680 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:36.477558 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:38.973325 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:40.978365 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:43.474016 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:45.475440 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:47.476030 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:48.981712 2073170 pod_ready.go:82] duration metric: took 4m0.014058258s for pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace to be "Ready" ...
	E1026 01:37:48.981744 2073170 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1026 01:37:48.981801 2073170 pod_ready.go:39] duration metric: took 5m20.850945581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:37:48.981824 2073170 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:37:48.981925 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1026 01:37:48.982046 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 01:37:49.061661 2073170 cri.go:89] found id: "caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
	I1026 01:37:49.061738 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
	I1026 01:37:49.061758 2073170 cri.go:89] found id: ""
	I1026 01:37:49.061783 2073170 logs.go:282] 2 containers: [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d]
	I1026 01:37:49.061874 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.066064 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.070465 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1026 01:37:49.070527 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 01:37:49.152162 2073170 cri.go:89] found id: "3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
	I1026 01:37:49.152183 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
	I1026 01:37:49.152189 2073170 cri.go:89] found id: ""
	I1026 01:37:49.152196 2073170 logs.go:282] 2 containers: [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272]
	I1026 01:37:49.152250 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.157843 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.161728 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1026 01:37:49.161874 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 01:37:49.213678 2073170 cri.go:89] found id: "c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
	I1026 01:37:49.213756 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
	I1026 01:37:49.213776 2073170 cri.go:89] found id: ""
	I1026 01:37:49.213800 2073170 logs.go:282] 2 containers: [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e]
	I1026 01:37:49.213885 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.220177 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.232203 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1026 01:37:49.232345 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 01:37:49.294557 2073170 cri.go:89] found id: "9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
	I1026 01:37:49.294645 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
	I1026 01:37:49.294665 2073170 cri.go:89] found id: ""
	I1026 01:37:49.294689 2073170 logs.go:282] 2 containers: [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7]
	I1026 01:37:49.294782 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.299146 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.303215 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1026 01:37:49.303357 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 01:37:49.350569 2073170 cri.go:89] found id: "f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
	I1026 01:37:49.350646 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
	I1026 01:37:49.350668 2073170 cri.go:89] found id: ""
	I1026 01:37:49.350691 2073170 logs.go:282] 2 containers: [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670]
	I1026 01:37:49.350780 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.356495 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.360987 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 01:37:49.361095 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 01:37:49.416682 2073170 cri.go:89] found id: "407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
	I1026 01:37:49.416758 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
	I1026 01:37:49.416778 2073170 cri.go:89] found id: ""
	I1026 01:37:49.416800 2073170 logs.go:282] 2 containers: [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526]
	I1026 01:37:49.416889 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.421667 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.425830 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1026 01:37:49.425971 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 01:37:49.476562 2073170 cri.go:89] found id: "19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
	I1026 01:37:49.476639 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
	I1026 01:37:49.476670 2073170 cri.go:89] found id: ""
	I1026 01:37:49.476691 2073170 logs.go:282] 2 containers: [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b]
	I1026 01:37:49.476777 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.481392 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.485639 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1026 01:37:49.485779 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 01:37:49.536284 2073170 cri.go:89] found id: "f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
	I1026 01:37:49.536306 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
	I1026 01:37:49.536312 2073170 cri.go:89] found id: ""
	I1026 01:37:49.536320 2073170 logs.go:282] 2 containers: [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad]
	I1026 01:37:49.536379 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.540772 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.545367 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 01:37:49.545440 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 01:37:49.595865 2073170 cri.go:89] found id: "ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
	I1026 01:37:49.595886 2073170 cri.go:89] found id: ""
	I1026 01:37:49.595894 2073170 logs.go:282] 1 containers: [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125]
	I1026 01:37:49.595953 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.606230 2073170 logs.go:123] Gathering logs for coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] ...
	I1026 01:37:49.606256 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
	I1026 01:37:49.660000 2073170 logs.go:123] Gathering logs for kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] ...
	I1026 01:37:49.660082 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
	I1026 01:37:49.717276 2073170 logs.go:123] Gathering logs for kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] ...
	I1026 01:37:49.717309 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
	I1026 01:37:49.815045 2073170 logs.go:123] Gathering logs for kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] ...
	I1026 01:37:49.815084 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
	I1026 01:37:49.932109 2073170 logs.go:123] Gathering logs for kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] ...
	I1026 01:37:49.932149 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
	I1026 01:37:50.002376 2073170 logs.go:123] Gathering logs for kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] ...
	I1026 01:37:50.002417 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
	I1026 01:37:50.059980 2073170 logs.go:123] Gathering logs for kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] ...
	I1026 01:37:50.060057 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
	I1026 01:37:50.142243 2073170 logs.go:123] Gathering logs for container status ...
	I1026 01:37:50.142278 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 01:37:50.273887 2073170 logs.go:123] Gathering logs for kubelet ...
	I1026 01:37:50.273926 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1026 01:37:50.400116 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142066     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-44wvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-44wvw" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.400368 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142157     658 reflector.go:138] object-"kube-system"/"metrics-server-token-7tsjh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7tsjh" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.400590 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142205     658 reflector.go:138] object-"kube-system"/"coredns-token-n94ql": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n94ql" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.400798 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142249     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.401019 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142293     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-47vp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-47vp6" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.401237 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142333     658 reflector.go:138] object-"kube-system"/"kindnet-token-qqrpm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qqrpm" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.401445 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142465     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.401657 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142549     658 reflector.go:138] object-"default"/"default-token-2jcx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2jcx9" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.409687 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.113479     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.411310 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.907637     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.414173 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:44 old-k8s-version-368787 kubelet[658]: E1026 01:32:44.742023     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.416401 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:56 old-k8s-version-368787 kubelet[658]: E1026 01:32:56.075801     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.416745 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.080022     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.416936 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.735904     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.417608 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:01 old-k8s-version-368787 kubelet[658]: E1026 01:33:01.507025     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.420461 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:11 old-k8s-version-368787 kubelet[658]: E1026 01:33:11.743711     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.421064 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:17 old-k8s-version-368787 kubelet[658]: E1026 01:33:17.172767     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.421398 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:21 old-k8s-version-368787 kubelet[658]: E1026 01:33:21.507449     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.421588 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:24 old-k8s-version-368787 kubelet[658]: E1026 01:33:24.731672     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.421924 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:32 old-k8s-version-368787 kubelet[658]: E1026 01:33:32.731878     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.422114 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:37 old-k8s-version-368787 kubelet[658]: E1026 01:33:37.731969     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.422719 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:46 old-k8s-version-368787 kubelet[658]: E1026 01:33:46.262782     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.422908 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:48 old-k8s-version-368787 kubelet[658]: E1026 01:33:48.732324     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.423246 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:51 old-k8s-version-368787 kubelet[658]: E1026 01:33:51.507083     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.425832 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:03 old-k8s-version-368787 kubelet[658]: E1026 01:34:03.750208     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.426182 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:06 old-k8s-version-368787 kubelet[658]: E1026 01:34:06.731790     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.426374 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:18 old-k8s-version-368787 kubelet[658]: E1026 01:34:18.732360     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.426713 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:21 old-k8s-version-368787 kubelet[658]: E1026 01:34:21.731670     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.426907 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:33 old-k8s-version-368787 kubelet[658]: E1026 01:34:33.732041     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.427535 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:37 old-k8s-version-368787 kubelet[658]: E1026 01:34:37.414157     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.427870 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:41 old-k8s-version-368787 kubelet[658]: E1026 01:34:41.507110     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.428130 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:48 old-k8s-version-368787 kubelet[658]: E1026 01:34:48.731821     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.428468 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:54 old-k8s-version-368787 kubelet[658]: E1026 01:34:54.731233     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.428662 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:59 old-k8s-version-368787 kubelet[658]: E1026 01:34:59.732434     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.428993 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:06 old-k8s-version-368787 kubelet[658]: E1026 01:35:06.731705     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.429180 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:12 old-k8s-version-368787 kubelet[658]: E1026 01:35:12.731827     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.429561 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:19 old-k8s-version-368787 kubelet[658]: E1026 01:35:19.732195     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.432106 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:25 old-k8s-version-368787 kubelet[658]: E1026 01:35:25.742123     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.432445 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:33 old-k8s-version-368787 kubelet[658]: E1026 01:35:33.731192     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.432634 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:40 old-k8s-version-368787 kubelet[658]: E1026 01:35:40.736836     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.432982 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:48 old-k8s-version-368787 kubelet[658]: E1026 01:35:48.731218     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.433171 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:53 old-k8s-version-368787 kubelet[658]: E1026 01:35:53.733617     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.433771 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:03 old-k8s-version-368787 kubelet[658]: E1026 01:36:03.662574     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.433959 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:06 old-k8s-version-368787 kubelet[658]: E1026 01:36:06.731650     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.434293 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:11 old-k8s-version-368787 kubelet[658]: E1026 01:36:11.507131     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.434525 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:21 old-k8s-version-368787 kubelet[658]: E1026 01:36:21.731783     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.434861 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:23 old-k8s-version-368787 kubelet[658]: E1026 01:36:23.731690     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.435204 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:34 old-k8s-version-368787 kubelet[658]: E1026 01:36:34.731309     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.435398 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.435735 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.435924 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.436258 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.436447 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.436780 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.436968 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.437304 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.437492 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.437824 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.438014 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1026 01:37:50.438025 2073170 logs.go:123] Gathering logs for dmesg ...
	I1026 01:37:50.438040 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 01:37:50.454757 2073170 logs.go:123] Gathering logs for describe nodes ...
	I1026 01:37:50.454785 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 01:37:50.669583 2073170 logs.go:123] Gathering logs for containerd ...
	I1026 01:37:50.669862 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1026 01:37:50.736640 2073170 logs.go:123] Gathering logs for coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] ...
	I1026 01:37:50.736718 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
	I1026 01:37:50.791237 2073170 logs.go:123] Gathering logs for kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] ...
	I1026 01:37:50.791266 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
	I1026 01:37:50.860038 2073170 logs.go:123] Gathering logs for storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] ...
	I1026 01:37:50.860076 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
	I1026 01:37:50.936359 2073170 logs.go:123] Gathering logs for kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] ...
	I1026 01:37:50.936407 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
	I1026 01:37:51.078999 2073170 logs.go:123] Gathering logs for kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] ...
	I1026 01:37:51.079039 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
	I1026 01:37:51.197002 2073170 logs.go:123] Gathering logs for storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] ...
	I1026 01:37:51.197040 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
	I1026 01:37:51.270252 2073170 logs.go:123] Gathering logs for kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] ...
	I1026 01:37:51.270281 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
	I1026 01:37:51.351708 2073170 logs.go:123] Gathering logs for kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] ...
	I1026 01:37:51.351739 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
	I1026 01:37:51.428214 2073170 logs.go:123] Gathering logs for etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] ...
	I1026 01:37:51.428289 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
	I1026 01:37:51.480860 2073170 logs.go:123] Gathering logs for etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] ...
	I1026 01:37:51.480949 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
	I1026 01:37:51.533094 2073170 out.go:358] Setting ErrFile to fd 2...
	I1026 01:37:51.533165 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1026 01:37:51.533239 2073170 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1026 01:37:51.533278 2073170 out.go:270]   Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:51.533314 2073170 out.go:270]   Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	  Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:51.533366 2073170 out.go:270]   Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:51.533403 2073170 out.go:270]   Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	  Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:51.533452 2073170 out.go:270]   Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1026 01:37:51.533488 2073170 out.go:358] Setting ErrFile to fd 2...
	I1026 01:37:51.533508 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:38:01.535291 2073170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:38:01.547514 2073170 api_server.go:72] duration metric: took 5m49.774798849s to wait for apiserver process to appear ...
	I1026 01:38:01.547541 2073170 api_server.go:88] waiting for apiserver healthz status ...
	I1026 01:38:01.547576 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1026 01:38:01.547632 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 01:38:01.587732 2073170 cri.go:89] found id: "caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
	I1026 01:38:01.587754 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
	I1026 01:38:01.587759 2073170 cri.go:89] found id: ""
	I1026 01:38:01.587766 2073170 logs.go:282] 2 containers: [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d]
	I1026 01:38:01.587828 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.592229 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.595984 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1026 01:38:01.596068 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 01:38:01.639841 2073170 cri.go:89] found id: "3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
	I1026 01:38:01.639871 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
	I1026 01:38:01.639876 2073170 cri.go:89] found id: ""
	I1026 01:38:01.639884 2073170 logs.go:282] 2 containers: [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272]
	I1026 01:38:01.639994 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.644607 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.648285 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1026 01:38:01.648362 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 01:38:01.720748 2073170 cri.go:89] found id: "c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
	I1026 01:38:01.720774 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
	I1026 01:38:01.720780 2073170 cri.go:89] found id: ""
	I1026 01:38:01.720787 2073170 logs.go:282] 2 containers: [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e]
	I1026 01:38:01.720846 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.726066 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.732857 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1026 01:38:01.732992 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 01:38:01.814967 2073170 cri.go:89] found id: "9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
	I1026 01:38:01.814997 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
	I1026 01:38:01.815005 2073170 cri.go:89] found id: ""
	I1026 01:38:01.815012 2073170 logs.go:282] 2 containers: [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7]
	I1026 01:38:01.815203 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.819665 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.826464 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1026 01:38:01.826610 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 01:38:01.897678 2073170 cri.go:89] found id: "f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
	I1026 01:38:01.897708 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
	I1026 01:38:01.897714 2073170 cri.go:89] found id: ""
	I1026 01:38:01.897727 2073170 logs.go:282] 2 containers: [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670]
	I1026 01:38:01.897878 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.922934 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.928999 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 01:38:01.929123 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 01:38:02.046457 2073170 cri.go:89] found id: "407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
	I1026 01:38:02.046487 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
	I1026 01:38:02.046498 2073170 cri.go:89] found id: ""
	I1026 01:38:02.046512 2073170 logs.go:282] 2 containers: [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526]
	I1026 01:38:02.046624 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.067786 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.076203 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1026 01:38:02.076352 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 01:38:02.150567 2073170 cri.go:89] found id: "19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
	I1026 01:38:02.150612 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
	I1026 01:38:02.150617 2073170 cri.go:89] found id: ""
	I1026 01:38:02.150673 2073170 logs.go:282] 2 containers: [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b]
	I1026 01:38:02.150774 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.156731 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.163096 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 01:38:02.163254 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 01:38:02.248045 2073170 cri.go:89] found id: "ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
	I1026 01:38:02.248072 2073170 cri.go:89] found id: ""
	I1026 01:38:02.248081 2073170 logs.go:282] 1 containers: [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125]
	I1026 01:38:02.248231 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.258094 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1026 01:38:02.258253 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 01:38:02.359394 2073170 cri.go:89] found id: "f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
	I1026 01:38:02.359428 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
	I1026 01:38:02.359433 2073170 cri.go:89] found id: ""
	I1026 01:38:02.359441 2073170 logs.go:282] 2 containers: [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad]
	I1026 01:38:02.359696 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.368425 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.375386 2073170 logs.go:123] Gathering logs for storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] ...
	I1026 01:38:02.375416 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
	I1026 01:38:02.483267 2073170 logs.go:123] Gathering logs for dmesg ...
	I1026 01:38:02.483431 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 01:38:02.539716 2073170 logs.go:123] Gathering logs for kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] ...
	I1026 01:38:02.539755 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
	I1026 01:38:02.733373 2073170 logs.go:123] Gathering logs for kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] ...
	I1026 01:38:02.733425 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
	I1026 01:38:02.854359 2073170 logs.go:123] Gathering logs for kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] ...
	I1026 01:38:02.854394 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
	I1026 01:38:02.955435 2073170 logs.go:123] Gathering logs for kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] ...
	I1026 01:38:02.955469 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
	I1026 01:38:03.040330 2073170 logs.go:123] Gathering logs for kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] ...
	I1026 01:38:03.040364 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
	I1026 01:38:03.184875 2073170 logs.go:123] Gathering logs for container status ...
	I1026 01:38:03.184928 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 01:38:03.308598 2073170 logs.go:123] Gathering logs for kubelet ...
	I1026 01:38:03.308637 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1026 01:38:03.395084 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142066     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-44wvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-44wvw" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.395487 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142157     658 reflector.go:138] object-"kube-system"/"metrics-server-token-7tsjh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7tsjh" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.395746 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142205     658 reflector.go:138] object-"kube-system"/"coredns-token-n94ql": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n94ql" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.395995 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142249     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.396249 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142293     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-47vp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-47vp6" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.396495 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142333     658 reflector.go:138] object-"kube-system"/"kindnet-token-qqrpm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qqrpm" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.396759 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142465     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.397012 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142549     658 reflector.go:138] object-"default"/"default-token-2jcx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2jcx9" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.405224 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.113479     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.406935 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.907637     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.410090 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:44 old-k8s-version-368787 kubelet[658]: E1026 01:32:44.742023     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.412301 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:56 old-k8s-version-368787 kubelet[658]: E1026 01:32:56.075801     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.412690 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.080022     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.412911 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.735904     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.413709 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:01 old-k8s-version-368787 kubelet[658]: E1026 01:33:01.507025     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.416720 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:11 old-k8s-version-368787 kubelet[658]: E1026 01:33:11.743711     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.417382 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:17 old-k8s-version-368787 kubelet[658]: E1026 01:33:17.172767     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.417786 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:21 old-k8s-version-368787 kubelet[658]: E1026 01:33:21.507449     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.418027 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:24 old-k8s-version-368787 kubelet[658]: E1026 01:33:24.731672     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.418426 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:32 old-k8s-version-368787 kubelet[658]: E1026 01:33:32.731878     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.418683 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:37 old-k8s-version-368787 kubelet[658]: E1026 01:33:37.731969     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.419380 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:46 old-k8s-version-368787 kubelet[658]: E1026 01:33:46.262782     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.419604 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:48 old-k8s-version-368787 kubelet[658]: E1026 01:33:48.732324     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.419986 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:51 old-k8s-version-368787 kubelet[658]: E1026 01:33:51.507083     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.422699 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:03 old-k8s-version-368787 kubelet[658]: E1026 01:34:03.750208     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.423152 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:06 old-k8s-version-368787 kubelet[658]: E1026 01:34:06.731790     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.423380 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:18 old-k8s-version-368787 kubelet[658]: E1026 01:34:18.732360     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.423781 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:21 old-k8s-version-368787 kubelet[658]: E1026 01:34:21.731670     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.423994 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:33 old-k8s-version-368787 kubelet[658]: E1026 01:34:33.732041     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.424632 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:37 old-k8s-version-368787 kubelet[658]: E1026 01:34:37.414157     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.425085 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:41 old-k8s-version-368787 kubelet[658]: E1026 01:34:41.507110     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.425345 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:48 old-k8s-version-368787 kubelet[658]: E1026 01:34:48.731821     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.425722 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:54 old-k8s-version-368787 kubelet[658]: E1026 01:34:54.731233     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.425954 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:59 old-k8s-version-368787 kubelet[658]: E1026 01:34:59.732434     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.426317 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:06 old-k8s-version-368787 kubelet[658]: E1026 01:35:06.731705     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.426524 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:12 old-k8s-version-368787 kubelet[658]: E1026 01:35:12.731827     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.426905 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:19 old-k8s-version-368787 kubelet[658]: E1026 01:35:19.732195     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.429672 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:25 old-k8s-version-368787 kubelet[658]: E1026 01:35:25.742123     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.430063 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:33 old-k8s-version-368787 kubelet[658]: E1026 01:35:33.731192     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.430283 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:40 old-k8s-version-368787 kubelet[658]: E1026 01:35:40.736836     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.430667 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:48 old-k8s-version-368787 kubelet[658]: E1026 01:35:48.731218     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.430891 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:53 old-k8s-version-368787 kubelet[658]: E1026 01:35:53.733617     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.431531 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:03 old-k8s-version-368787 kubelet[658]: E1026 01:36:03.662574     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.431751 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:06 old-k8s-version-368787 kubelet[658]: E1026 01:36:06.731650     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.432125 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:11 old-k8s-version-368787 kubelet[658]: E1026 01:36:11.507131     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.432342 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:21 old-k8s-version-368787 kubelet[658]: E1026 01:36:21.731783     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.432691 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:23 old-k8s-version-368787 kubelet[658]: E1026 01:36:23.731690     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.433042 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:34 old-k8s-version-368787 kubelet[658]: E1026 01:36:34.731309     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.433355 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.433731 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.433935 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.434295 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.434516 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.434912 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.435166 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.435545 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.435770 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.436139 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.436351 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.436716 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.436953 2073170 logs.go:138] Found kubelet problem: Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1026 01:38:03.436967 2073170 logs.go:123] Gathering logs for etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] ...
	I1026 01:38:03.436992 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
	I1026 01:38:03.527806 2073170 logs.go:123] Gathering logs for etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] ...
	I1026 01:38:03.527843 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
	I1026 01:38:03.598581 2073170 logs.go:123] Gathering logs for kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] ...
	I1026 01:38:03.598756 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
	I1026 01:38:03.677581 2073170 logs.go:123] Gathering logs for containerd ...
	I1026 01:38:03.677658 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1026 01:38:03.753106 2073170 logs.go:123] Gathering logs for describe nodes ...
	I1026 01:38:03.753195 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 01:38:03.997226 2073170 logs.go:123] Gathering logs for coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] ...
	I1026 01:38:03.997300 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
	I1026 01:38:04.087455 2073170 logs.go:123] Gathering logs for kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] ...
	I1026 01:38:04.087550 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
	I1026 01:38:04.175664 2073170 logs.go:123] Gathering logs for kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] ...
	I1026 01:38:04.175745 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
	I1026 01:38:04.270341 2073170 logs.go:123] Gathering logs for kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] ...
	I1026 01:38:04.270371 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
	I1026 01:38:04.370143 2073170 logs.go:123] Gathering logs for storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] ...
	I1026 01:38:04.370175 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
	I1026 01:38:04.447078 2073170 logs.go:123] Gathering logs for kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] ...
	I1026 01:38:04.447109 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
	I1026 01:38:04.545939 2073170 logs.go:123] Gathering logs for coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] ...
	I1026 01:38:04.545976 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
	I1026 01:38:04.715996 2073170 logs.go:123] Gathering logs for kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] ...
	I1026 01:38:04.716021 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
	I1026 01:38:04.880261 2073170 out.go:358] Setting ErrFile to fd 2...
	I1026 01:38:04.880333 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1026 01:38:04.880402 2073170 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1026 01:38:04.880449 2073170 out.go:270]   Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:04.880486 2073170 out.go:270]   Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	  Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:04.880529 2073170 out.go:270]   Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:04.880562 2073170 out.go:270]   Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	  Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:04.880596 2073170 out.go:270]   Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1026 01:38:04.880641 2073170 out.go:358] Setting ErrFile to fd 2...
	I1026 01:38:04.880663 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:38:14.881298 2073170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 01:38:14.898252 2073170 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 01:38:14.902356 2073170 out.go:201] 
	W1026 01:38:14.905153 2073170 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1026 01:38:14.905189 2073170 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1026 01:38:14.905207 2073170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1026 01:38:14.905214 2073170 out.go:270] * 
	* 
	W1026 01:38:14.906019 2073170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:38:14.907947 2073170 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-368787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
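For reference, the recovery suggested by the K8S_UNHEALTHY_CONTROL_PLANE message in the stderr above can be scripted as below. This is a hedged sketch only, reusing the same binary, profile and flags as the failed invocation; it was not run as part of this test:

	# Purge all minikube profiles and cached state, then retry the identical start command.
	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-368787 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.20.0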
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-368787
helpers_test.go:235: (dbg) docker inspect old-k8s-version-368787:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a",
	        "Created": "2024-10-26T01:29:00.83665828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2073366,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-26T01:32:04.112953704Z",
	            "FinishedAt": "2024-10-26T01:32:02.727115138Z"
	        },
	        "Image": "sha256:e536a13478ac3e12b0286f2242f0931e32c32cc3eeb0139a219c9133dcd9fe90",
	        "ResolvConfPath": "/var/lib/docker/containers/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a/hosts",
	        "LogPath": "/var/lib/docker/containers/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a/7dcafe5f5b3f62ed5d1c908bcc436d14a94693b305cb7d4ff1191fa0e9d60b8a-json.log",
	        "Name": "/old-k8s-version-368787",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-368787:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-368787",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/729fddbdd51de18b1d80fbfbb0e03fea5a6c4b3b58ef10c9fc1272371176757a-init/diff:/var/lib/docker/overlay2/438660a3bbbc35bff890f07029ce43b51006aa7672592e2474721b86d466905b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/729fddbdd51de18b1d80fbfbb0e03fea5a6c4b3b58ef10c9fc1272371176757a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/729fddbdd51de18b1d80fbfbb0e03fea5a6c4b3b58ef10c9fc1272371176757a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/729fddbdd51de18b1d80fbfbb0e03fea5a6c4b3b58ef10c9fc1272371176757a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-368787",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-368787/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-368787",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-368787",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-368787",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28b72d80bcb33e3bd32ecc0ef53a2eea2452efad336a6f8f183b5299baafc8df",
	            "SandboxKey": "/var/run/docker/netns/28b72d80bcb3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35304"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35305"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35308"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35306"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35307"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-368787": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "394804f4b2b3ec80d8f10c173dead534a044bceba117e946f47c8188d66bbc41",
	                    "EndpointID": "74e1e1a0c1fa2a6b4c7f84eea92f4b45a30ad475d47d0cfc7bb454892fa0c2d2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-368787",
	                        "7dcafe5f5b3f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-368787 -n old-k8s-version-368787
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-368787 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-368787 logs -n 25: (2.805980027s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-335477                              | cert-expiration-335477       | jenkins | v1.34.0 | 26 Oct 24 01:27 UTC | 26 Oct 24 01:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | force-systemd-env-968413                               | force-systemd-env-968413     | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
	|         | ssh cat                                                |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-968413                            | force-systemd-env-968413     | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
	| start   | -p cert-options-712326                                 | cert-options-712326          | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | cert-options-712326 ssh                                | cert-options-712326          | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-712326 -- sudo                         | cert-options-712326          | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-712326                                 | cert-options-712326          | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:28 UTC |
	| start   | -p old-k8s-version-368787                              | old-k8s-version-368787       | jenkins | v1.34.0 | 26 Oct 24 01:28 UTC | 26 Oct 24 01:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-335477                              | cert-expiration-335477       | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:31 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-335477                              | cert-expiration-335477       | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:31 UTC |
	| start   | -p                                                     | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:32 UTC |
	|         | default-k8s-diff-port-314480                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-368787        | old-k8s-version-368787       | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-368787                              | old-k8s-version-368787       | jenkins | v1.34.0 | 26 Oct 24 01:31 UTC | 26 Oct 24 01:32 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-368787             | old-k8s-version-368787       | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-368787                              | old-k8s-version-368787       | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-314480  | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:32 UTC |
	|         | default-k8s-diff-port-314480                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-314480       | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:32 UTC | 26 Oct 24 01:37 UTC |
	|         | default-k8s-diff-port-314480                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-314480                           | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
	|         | default-k8s-diff-port-314480                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
	|         | default-k8s-diff-port-314480                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
	|         | default-k8s-diff-port-314480                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-314480 | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC | 26 Oct 24 01:37 UTC |
	|         | default-k8s-diff-port-314480                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-892584                                  | embed-certs-892584           | jenkins | v1.34.0 | 26 Oct 24 01:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 01:37:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 01:37:39.366044 2083289 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:37:39.366275 2083289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:37:39.366303 2083289 out.go:358] Setting ErrFile to fd 2...
	I1026 01:37:39.366328 2083289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:37:39.366603 2083289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 01:37:39.367097 2083289 out.go:352] Setting JSON to false
	I1026 01:37:39.368165 2083289 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":33610,"bootTime":1729873050,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 01:37:39.368345 2083289 start.go:139] virtualization:  
	I1026 01:37:39.371050 2083289 out.go:177] * [embed-certs-892584] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1026 01:37:39.373115 2083289 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:37:39.373203 2083289 notify.go:220] Checking for updates...
	I1026 01:37:39.377234 2083289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:37:39.379109 2083289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 01:37:39.380969 2083289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	I1026 01:37:39.382869 2083289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 01:37:39.385168 2083289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:37:39.387487 2083289 config.go:182] Loaded profile config "old-k8s-version-368787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1026 01:37:39.387597 2083289 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:37:39.412884 2083289 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1026 01:37:39.413032 2083289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:37:39.484598 2083289 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 01:37:39.473684666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 01:37:39.484720 2083289 docker.go:318] overlay module found
	I1026 01:37:39.486825 2083289 out.go:177] * Using the docker driver based on user configuration
	I1026 01:37:39.488729 2083289 start.go:297] selected driver: docker
	I1026 01:37:39.488748 2083289 start.go:901] validating driver "docker" against <nil>
	I1026 01:37:39.488763 2083289 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:37:39.489516 2083289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:37:39.541410 2083289 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 01:37:39.531208758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
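The driver check above reduces to a single `docker system info` probe whose JSON output is parsed for the fields echoed in the log (CPU count, memory, cgroup driver, security options). A hand-run equivalent, assuming `jq` is installed on the host, would look like:

    # Same probe the log shows, trimmed to the fields that matter for driver validation.
    docker system info --format '{{json .}}' \
      | jq '{NCPU, MemTotal, CgroupDriver, SecurityOptions}'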
	I1026 01:37:39.541621 2083289 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 01:37:39.541862 2083289 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 01:37:39.544391 2083289 out.go:177] * Using Docker driver with root privileges
	I1026 01:37:39.546618 2083289 cni.go:84] Creating CNI manager for ""
	I1026 01:37:39.546686 2083289 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1026 01:37:39.546704 2083289 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 01:37:39.546789 2083289 start.go:340] cluster config:
	{Name:embed-certs-892584 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:37:39.549015 2083289 out.go:177] * Starting "embed-certs-892584" primary control-plane node in "embed-certs-892584" cluster
	I1026 01:37:39.550806 2083289 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1026 01:37:39.552880 2083289 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
	I1026 01:37:39.555068 2083289 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1026 01:37:39.555128 2083289 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1026 01:37:39.555164 2083289 cache.go:56] Caching tarball of preloaded images
	I1026 01:37:39.555158 2083289 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1026 01:37:39.555250 2083289 preload.go:172] Found /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1026 01:37:39.555260 2083289 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
	I1026 01:37:39.555478 2083289 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/config.json ...
	I1026 01:37:39.555568 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/config.json: {Name:mk779949728dad0ca65fc40f5c31f9b716a262de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:37:39.574544 2083289 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
	I1026 01:37:39.574570 2083289 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
	I1026 01:37:39.574585 2083289 cache.go:194] Successfully downloaded all kic artifacts
	I1026 01:37:39.574608 2083289 start.go:360] acquireMachinesLock for embed-certs-892584: {Name:mk4b48d59e38b37e589663d987ea35cd2a3247dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 01:37:39.574714 2083289 start.go:364] duration metric: took 86.828µs to acquireMachinesLock for "embed-certs-892584"
	I1026 01:37:39.574758 2083289 start.go:93] Provisioning new machine with config: &{Name:embed-certs-892584 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1026 01:37:39.574837 2083289 start.go:125] createHost starting for "" (driver="docker")
	I1026 01:37:38.973325 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:40.978365 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:43.474016 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:39.579441 2083289 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1026 01:37:39.579702 2083289 start.go:159] libmachine.API.Create for "embed-certs-892584" (driver="docker")
	I1026 01:37:39.579747 2083289 client.go:168] LocalClient.Create starting
	I1026 01:37:39.579824 2083289 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem
	I1026 01:37:39.579866 2083289 main.go:141] libmachine: Decoding PEM data...
	I1026 01:37:39.579885 2083289 main.go:141] libmachine: Parsing certificate...
	I1026 01:37:39.579948 2083289 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem
	I1026 01:37:39.579971 2083289 main.go:141] libmachine: Decoding PEM data...
	I1026 01:37:39.579990 2083289 main.go:141] libmachine: Parsing certificate...
	I1026 01:37:39.580375 2083289 cli_runner.go:164] Run: docker network inspect embed-certs-892584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 01:37:39.596561 2083289 cli_runner.go:211] docker network inspect embed-certs-892584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 01:37:39.596649 2083289 network_create.go:284] running [docker network inspect embed-certs-892584] to gather additional debugging logs...
	I1026 01:37:39.596670 2083289 cli_runner.go:164] Run: docker network inspect embed-certs-892584
	W1026 01:37:39.614879 2083289 cli_runner.go:211] docker network inspect embed-certs-892584 returned with exit code 1
	I1026 01:37:39.614911 2083289 network_create.go:287] error running [docker network inspect embed-certs-892584]: docker network inspect embed-certs-892584: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-892584 not found
	I1026 01:37:39.614932 2083289 network_create.go:289] output of [docker network inspect embed-certs-892584]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-892584 not found
	
	** /stderr **
	I1026 01:37:39.615044 2083289 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 01:37:39.633774 2083289 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b80904004ad6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8f:a1:c9:9e} reservation:<nil>}
	I1026 01:37:39.634431 2083289 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2dec2bba0dc7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:57:02:36:e1} reservation:<nil>}
	I1026 01:37:39.635160 2083289 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b1c506f42330 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:70:55:89:c3} reservation:<nil>}
	I1026 01:37:39.635751 2083289 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-394804f4b2b3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:c0:39:57:49} reservation:<nil>}
	I1026 01:37:39.636452 2083289 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a10830}
	I1026 01:37:39.636486 2083289 network_create.go:124] attempt to create docker network embed-certs-892584 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 01:37:39.636579 2083289 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-892584 embed-certs-892584
	I1026 01:37:39.724688 2083289 network_create.go:108] docker network embed-certs-892584 192.168.85.0/24 created
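The subnet scan above walks 192.168.49.0/24 through 192.168.76.0/24, finds them taken, and creates the bridge network on the first free range with the exact flags logged. Reproduced by hand (subnet, labels and options copied from the log):

    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=embed-certs-892584 \
      embed-certs-892584
    # Confirm the subnet and gateway were applied.
    docker network inspect embed-certs-892584 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'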
	I1026 01:37:39.724725 2083289 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-892584" container
	I1026 01:37:39.724814 2083289 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 01:37:39.744469 2083289 cli_runner.go:164] Run: docker volume create embed-certs-892584 --label name.minikube.sigs.k8s.io=embed-certs-892584 --label created_by.minikube.sigs.k8s.io=true
	I1026 01:37:39.761304 2083289 oci.go:103] Successfully created a docker volume embed-certs-892584
	I1026 01:37:39.761409 2083289 cli_runner.go:164] Run: docker run --rm --name embed-certs-892584-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-892584 --entrypoint /usr/bin/test -v embed-certs-892584:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib
	I1026 01:37:40.485625 2083289 oci.go:107] Successfully prepared a docker volume embed-certs-892584
	I1026 01:37:40.485683 2083289 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1026 01:37:40.485704 2083289 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 01:37:40.485782 2083289 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-892584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 01:37:45.475440 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
	I1026 01:37:47.476030 2073170 pod_ready.go:103] pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace has status "Ready":"False"
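The pod_ready lines interleaved here belong to the old-k8s-version-368787 start (process 2073170), which is polling the Ready condition of the metrics-server pod and never sees it turn True. As a hedged illustration only, the same condition can be read by hand through minikube's kubectl pass-through (pod name taken from the log; the jsonpath filter is the standard kubectl syntax, not something shown in this report):

    # Manual check of the Ready condition the test is waiting on.
    out/minikube-linux-arm64 -p old-k8s-version-368787 kubectl -- \
      -n kube-system get pod metrics-server-9975d5f86-v2pwf \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'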
	I1026 01:37:44.952039 2083289 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-892584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir: (4.466213841s)
	I1026 01:37:44.952085 2083289 kic.go:203] duration metric: took 4.466377741s to extract preloaded images to volume ...
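The 4.5 s step just completed is an lz4 preload tarball being unpacked into the machine's named volume through a throwaway container. A minimal sketch of the same pattern, with the image reference and preload path copied from the log (the minikube labels on the volume are omitted here):

    KICBASE='gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e'
    PRELOAD=/home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
    docker volume create embed-certs-892584
    # One-shot container whose only job is to extract the preloaded images into the volume.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro \
      -v embed-certs-892584:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir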
	W1026 01:37:44.952287 2083289 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1026 01:37:44.952439 2083289 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 01:37:45.084690 2083289 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-892584 --name embed-certs-892584 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-892584 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-892584 --network embed-certs-892584 --ip 192.168.85.2 --volume embed-certs-892584:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e
	I1026 01:37:45.613009 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Running}}
	I1026 01:37:45.632128 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
	I1026 01:37:45.655571 2083289 cli_runner.go:164] Run: docker exec embed-certs-892584 stat /var/lib/dpkg/alternatives/iptables
	I1026 01:37:45.719287 2083289 oci.go:144] the created container "embed-certs-892584" has a running status.
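From here on the provisioner talks to the node over SSH on a host port that Docker chose for the container's published 22/tcp. The lookup the log repeats (and which resolves to 127.0.0.1:35314 below) can be run directly; the inspect templates are the same ones shown in the log:

    # Host port mapped to the node's SSH port, plus a quick status check.
    docker container inspect embed-certs-892584 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    docker container inspect embed-certs-892584 --format '{{.State.Status}}'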
	I1026 01:37:45.719353 2083289 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa...
	I1026 01:37:46.461894 2083289 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 01:37:46.505189 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
	I1026 01:37:46.533159 2083289 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 01:37:46.533178 2083289 kic_runner.go:114] Args: [docker exec --privileged embed-certs-892584 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 01:37:46.619570 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
	I1026 01:37:46.637355 2083289 machine.go:93] provisionDockerMachine start ...
	I1026 01:37:46.637455 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
	I1026 01:37:46.655818 2083289 main.go:141] libmachine: Using SSH client type: native
	I1026 01:37:46.656127 2083289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 35314 <nil> <nil>}
	I1026 01:37:46.656145 2083289 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 01:37:46.795475 2083289 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-892584
	
	I1026 01:37:46.795503 2083289 ubuntu.go:169] provisioning hostname "embed-certs-892584"
	I1026 01:37:46.795592 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
	I1026 01:37:46.814914 2083289 main.go:141] libmachine: Using SSH client type: native
	I1026 01:37:46.815168 2083289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 35314 <nil> <nil>}
	I1026 01:37:46.815188 2083289 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-892584 && echo "embed-certs-892584" | sudo tee /etc/hostname
	I1026 01:37:46.985337 2083289 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-892584
	
	I1026 01:37:46.985427 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
	I1026 01:37:47.013204 2083289 main.go:141] libmachine: Using SSH client type: native
	I1026 01:37:47.013459 2083289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 35314 <nil> <nil>}
	I1026 01:37:47.013484 2083289 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-892584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-892584/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-892584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 01:37:47.155471 2083289 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 01:37:47.155541 2083289 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19868-1857747/.minikube CaCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19868-1857747/.minikube}
	I1026 01:37:47.155591 2083289 ubuntu.go:177] setting up certificates
	I1026 01:37:47.155614 2083289 provision.go:84] configureAuth start
	I1026 01:37:47.155698 2083289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-892584
	I1026 01:37:47.173892 2083289 provision.go:143] copyHostCerts
	I1026 01:37:47.173967 2083289 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem, removing ...
	I1026 01:37:47.173982 2083289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem
	I1026 01:37:47.174061 2083289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.pem (1078 bytes)
	I1026 01:37:47.174424 2083289 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem, removing ...
	I1026 01:37:47.174441 2083289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem
	I1026 01:37:47.174480 2083289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/cert.pem (1123 bytes)
	I1026 01:37:47.174566 2083289 exec_runner.go:144] found /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem, removing ...
	I1026 01:37:47.174572 2083289 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem
	I1026 01:37:47.174602 2083289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19868-1857747/.minikube/key.pem (1675 bytes)
	I1026 01:37:47.174680 2083289 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-892584 san=[127.0.0.1 192.168.85.2 embed-certs-892584 localhost minikube]
	I1026 01:37:47.679481 2083289 provision.go:177] copyRemoteCerts
	I1026 01:37:47.679551 2083289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 01:37:47.679599 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
	I1026 01:37:47.696244 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
	I1026 01:37:47.793250 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 01:37:47.819673 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 01:37:47.844892 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 01:37:47.869236 2083289 provision.go:87] duration metric: took 713.596584ms to configureAuth
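configureAuth generated a machine server certificate with the SANs listed above (127.0.0.1, 192.168.85.2, embed-certs-892584, localhost, minikube) and copied it to /etc/docker on the node. To verify the SANs after the fact, assuming a reasonably recent openssl on the host (the -ext flag needs OpenSSL 1.1.1 or newer):

    # Print subject and SANs of the server certificate kept on the host side.
    openssl x509 -noout -subject -ext subjectAltName \
      -in /home/jenkins/minikube-integration/19868-1857747/.minikube/machines/server.pem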
	I1026 01:37:47.869262 2083289 ubuntu.go:193] setting minikube options for container-runtime
	I1026 01:37:47.869451 2083289 config.go:182] Loaded profile config "embed-certs-892584": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1026 01:37:47.869459 2083289 machine.go:96] duration metric: took 1.232081114s to provisionDockerMachine
	I1026 01:37:47.869465 2083289 client.go:171] duration metric: took 8.289708081s to LocalClient.Create
	I1026 01:37:47.869488 2083289 start.go:167] duration metric: took 8.289786899s to libmachine.API.Create "embed-certs-892584"
	I1026 01:37:47.869498 2083289 start.go:293] postStartSetup for "embed-certs-892584" (driver="docker")
	I1026 01:37:47.869507 2083289 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 01:37:47.869562 2083289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 01:37:47.869603 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
	I1026 01:37:47.887902 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
	I1026 01:37:47.988607 2083289 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 01:37:47.992000 2083289 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 01:37:47.992037 2083289 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 01:37:47.992048 2083289 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 01:37:47.992055 2083289 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1026 01:37:47.992069 2083289 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-1857747/.minikube/addons for local assets ...
	I1026 01:37:47.992135 2083289 filesync.go:126] Scanning /home/jenkins/minikube-integration/19868-1857747/.minikube/files for local assets ...
	I1026 01:37:47.992214 2083289 filesync.go:149] local asset: /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem -> 18643732.pem in /etc/ssl/certs
	I1026 01:37:47.992323 2083289 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 01:37:48.002636 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem --> /etc/ssl/certs/18643732.pem (1708 bytes)
	I1026 01:37:48.035498 2083289 start.go:296] duration metric: took 165.984021ms for postStartSetup
	I1026 01:37:48.035962 2083289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-892584
	I1026 01:37:48.060588 2083289 profile.go:143] Saving config to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/config.json ...
	I1026 01:37:48.060905 2083289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:37:48.060958 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
	I1026 01:37:48.080090 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
	I1026 01:37:48.176629 2083289 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 01:37:48.181852 2083289 start.go:128] duration metric: took 8.606998108s to createHost
	I1026 01:37:48.181877 2083289 start.go:83] releasing machines lock for "embed-certs-892584", held for 8.607148674s
	I1026 01:37:48.181954 2083289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-892584
	I1026 01:37:48.198709 2083289 ssh_runner.go:195] Run: cat /version.json
	I1026 01:37:48.198762 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
	I1026 01:37:48.198999 2083289 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 01:37:48.199072 2083289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-892584
	I1026 01:37:48.215393 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
	I1026 01:37:48.233594 2083289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35314 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/embed-certs-892584/id_rsa Username:docker}
	I1026 01:37:48.303172 2083289 ssh_runner.go:195] Run: systemctl --version
	I1026 01:37:48.468633 2083289 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 01:37:48.475581 2083289 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1026 01:37:48.501613 2083289 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1026 01:37:48.501697 2083289 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 01:37:48.532698 2083289 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
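The find command above disables the competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so only the config kindnet installs later is honoured. The core of that step, written out as a plain shell sketch (run inside the node, e.g. via `minikube ssh`; it folds the log's printf/mv pair into one -exec):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;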
	I1026 01:37:48.532723 2083289 start.go:495] detecting cgroup driver to use...
	I1026 01:37:48.532756 2083289 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1026 01:37:48.532808 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1026 01:37:48.545587 2083289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1026 01:37:48.557746 2083289 docker.go:217] disabling cri-docker service (if available) ...
	I1026 01:37:48.557816 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 01:37:48.571885 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 01:37:48.587883 2083289 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 01:37:48.682685 2083289 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 01:37:48.776738 2083289 docker.go:233] disabling docker service ...
	I1026 01:37:48.776841 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 01:37:48.800961 2083289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 01:37:48.813232 2083289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 01:37:48.899310 2083289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 01:37:49.008549 2083289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 01:37:49.021756 2083289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 01:37:49.044890 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1026 01:37:49.057489 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1026 01:37:49.069891 2083289 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1026 01:37:49.069994 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1026 01:37:49.082119 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1026 01:37:49.094646 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1026 01:37:49.105712 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1026 01:37:49.119169 2083289 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 01:37:49.128521 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1026 01:37:49.140876 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1026 01:37:49.154741 2083289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1026 01:37:49.166815 2083289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 01:37:49.177951 2083289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 01:37:49.192118 2083289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:37:49.315011 2083289 ssh_runner.go:195] Run: sudo systemctl restart containerd
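The run of sed edits above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver detected on the host, pins the pause image, points crictl at the containerd socket, and then restarts the service. Condensed to the essential commands (all taken from the log; run inside the node):

    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
    sudo /usr/bin/crictl version   # should report containerd 1.7.22, RuntimeApiVersion v1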
	I1026 01:37:49.517180 2083289 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1026 01:37:49.517286 2083289 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1026 01:37:49.521791 2083289 start.go:563] Will wait 60s for crictl version
	I1026 01:37:49.521891 2083289 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.525986 2083289 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 01:37:49.598803 2083289 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1026 01:37:49.598903 2083289 ssh_runner.go:195] Run: containerd --version
	I1026 01:37:49.631393 2083289 ssh_runner.go:195] Run: containerd --version
	I1026 01:37:49.672197 2083289 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
	I1026 01:37:49.674262 2083289 cli_runner.go:164] Run: docker network inspect embed-certs-892584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 01:37:49.702595 2083289 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 01:37:49.707901 2083289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:37:49.724910 2083289 kubeadm.go:883] updating cluster {Name:embed-certs-892584 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 01:37:49.725074 2083289 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1026 01:37:49.725154 2083289 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:37:49.810836 2083289 containerd.go:627] all images are preloaded for containerd runtime.
	I1026 01:37:49.810944 2083289 containerd.go:534] Images already preloaded, skipping extraction
	I1026 01:37:49.811079 2083289 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 01:37:49.877764 2083289 containerd.go:627] all images are preloaded for containerd runtime.
	I1026 01:37:49.877786 2083289 cache_images.go:84] Images are preloaded, skipping loading
	I1026 01:37:49.877793 2083289 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.2 containerd true true} ...
	I1026 01:37:49.877900 2083289 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-892584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 01:37:49.877971 2083289 ssh_runner.go:195] Run: sudo crictl info
	I1026 01:37:49.938813 2083289 cni.go:84] Creating CNI manager for ""
	I1026 01:37:49.938886 2083289 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1026 01:37:49.938921 2083289 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1026 01:37:49.938982 2083289 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-892584 NodeName:embed-certs-892584 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 01:37:49.939166 2083289 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-892584"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 01:37:49.939304 2083289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1026 01:37:49.960744 2083289 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 01:37:49.960881 2083289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 01:37:49.974039 2083289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1026 01:37:49.993345 2083289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 01:37:50.018112 2083289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
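The rendered kubeadm config shown above has just been copied to /var/tmp/minikube/kubeadm.yaml.new on the node (it is moved to kubeadm.yaml before bootstrap, as the end of this log shows). A non-destructive way to sanity-check a config like this, assuming kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.31.2 and the command is run inside the node:

    # Validate the rendered config without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run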
	I1026 01:37:50.049463 2083289 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 01:37:50.053790 2083289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 01:37:50.069678 2083289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:37:50.201876 2083289 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 01:37:50.216932 2083289 certs.go:68] Setting up /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584 for IP: 192.168.85.2
	I1026 01:37:50.217009 2083289 certs.go:194] generating shared ca certs ...
	I1026 01:37:50.217043 2083289 certs.go:226] acquiring lock for ca certs: {Name:mkcea56562cecb76fcc8b6004959524ff574e9b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:37:50.217272 2083289 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.key
	I1026 01:37:50.217353 2083289 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.key
	I1026 01:37:50.217388 2083289 certs.go:256] generating profile certs ...
	I1026 01:37:50.217488 2083289 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.key
	I1026 01:37:50.217522 2083289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.crt with IP's: []
	I1026 01:37:50.831745 2083289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.crt ...
	I1026 01:37:50.831858 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.crt: {Name:mk231d5785b52be9398c1cd11c69cb093a17dc5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:37:50.832108 2083289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.key ...
	I1026 01:37:50.832172 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/client.key: {Name:mk63457e82094ff3f2b63a9f1b335d0baeaf01a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:37:50.832799 2083289 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key.b5a2e078
	I1026 01:37:50.832867 2083289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt.b5a2e078 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 01:37:51.315630 2083289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt.b5a2e078 ...
	I1026 01:37:51.315726 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt.b5a2e078: {Name:mkbfc441fec043b099535fe54c9453350d9e1e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:37:51.316427 2083289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key.b5a2e078 ...
	I1026 01:37:51.316474 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key.b5a2e078: {Name:mk90ba137d89d0cae34618f463703703a6c235d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:37:51.316630 2083289 certs.go:381] copying /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt.b5a2e078 -> /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt
	I1026 01:37:51.316763 2083289 certs.go:385] copying /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key.b5a2e078 -> /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key
	I1026 01:37:51.316869 2083289 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.key
	I1026 01:37:51.316906 2083289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.crt with IP's: []
	I1026 01:37:51.738864 2083289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.crt ...
	I1026 01:37:51.738899 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.crt: {Name:mk2dea1ddde2f81e3c925cc3f4e1f3443347385f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:37:51.739545 2083289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.key ...
	I1026 01:37:51.739563 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.key: {Name:mka770a543cd600bbccfe52856f0b475fa9e82da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:37:51.739770 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373.pem (1338 bytes)
	W1026 01:37:51.739815 2083289 certs.go:480] ignoring /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373_empty.pem, impossibly tiny 0 bytes
	I1026 01:37:51.739834 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 01:37:51.739859 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/ca.pem (1078 bytes)
	I1026 01:37:51.739885 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/cert.pem (1123 bytes)
	I1026 01:37:51.739910 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/key.pem (1675 bytes)
	I1026 01:37:51.739956 2083289 certs.go:484] found cert: /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem (1708 bytes)
	I1026 01:37:51.740596 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 01:37:51.767431 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1026 01:37:51.794866 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 01:37:51.821067 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1026 01:37:51.846652 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 01:37:51.871596 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 01:37:51.896887 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 01:37:51.932056 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/embed-certs-892584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 01:37:51.960375 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/certs/1864373.pem --> /usr/share/ca-certificates/1864373.pem (1338 bytes)
	I1026 01:37:51.994124 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/ssl/certs/18643732.pem --> /usr/share/ca-certificates/18643732.pem (1708 bytes)
	I1026 01:37:52.030165 2083289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19868-1857747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 01:37:52.058438 2083289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 01:37:52.077514 2083289 ssh_runner.go:195] Run: openssl version
	I1026 01:37:52.086511 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1864373.pem && ln -fs /usr/share/ca-certificates/1864373.pem /etc/ssl/certs/1864373.pem"
	I1026 01:37:52.097803 2083289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1864373.pem
	I1026 01:37:52.101748 2083289 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:51 /usr/share/ca-certificates/1864373.pem
	I1026 01:37:52.101889 2083289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1864373.pem
	I1026 01:37:52.115147 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1864373.pem /etc/ssl/certs/51391683.0"
	I1026 01:37:52.124952 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18643732.pem && ln -fs /usr/share/ca-certificates/18643732.pem /etc/ssl/certs/18643732.pem"
	I1026 01:37:52.134472 2083289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18643732.pem
	I1026 01:37:52.138151 2083289 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:51 /usr/share/ca-certificates/18643732.pem
	I1026 01:37:52.138257 2083289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18643732.pem
	I1026 01:37:52.145360 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18643732.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 01:37:52.154996 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 01:37:52.164524 2083289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:37:52.168090 2083289 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:37:52.168178 2083289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 01:37:52.175455 2083289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 01:37:52.184952 2083289 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 01:37:52.188791 2083289 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 01:37:52.188843 2083289 kubeadm.go:392] StartCluster: {Name:embed-certs-892584 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-892584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 01:37:52.188931 2083289 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1026 01:37:52.188991 2083289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 01:37:52.233565 2083289 cri.go:89] found id: ""
	I1026 01:37:52.233709 2083289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 01:37:52.242963 2083289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 01:37:52.253243 2083289 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 01:37:52.253315 2083289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 01:37:52.262290 2083289 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 01:37:52.262363 2083289 kubeadm.go:157] found existing configuration files:
	
	I1026 01:37:52.262436 2083289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 01:37:52.271182 2083289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 01:37:52.271276 2083289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 01:37:52.280164 2083289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 01:37:52.289372 2083289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 01:37:52.289462 2083289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 01:37:52.298266 2083289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 01:37:52.308092 2083289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 01:37:52.308183 2083289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 01:37:52.317501 2083289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 01:37:52.326786 2083289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 01:37:52.326908 2083289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 01:37:52.335564 2083289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 01:37:52.407097 2083289 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1026 01:37:52.407264 2083289 kubeadm.go:310] [preflight] Running pre-flight checks
	I1026 01:37:52.430200 2083289 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1026 01:37:52.430309 2083289 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-aws
	I1026 01:37:52.430369 2083289 kubeadm.go:310] OS: Linux
	I1026 01:37:52.430446 2083289 kubeadm.go:310] CGROUPS_CPU: enabled
	I1026 01:37:52.430526 2083289 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1026 01:37:52.430602 2083289 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1026 01:37:52.430674 2083289 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1026 01:37:52.430749 2083289 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1026 01:37:52.430826 2083289 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1026 01:37:52.430900 2083289 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1026 01:37:52.431017 2083289 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1026 01:37:52.431092 2083289 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1026 01:37:52.502421 2083289 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 01:37:52.502538 2083289 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 01:37:52.502635 2083289 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 01:37:52.511740 2083289 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 01:37:48.981712 2073170 pod_ready.go:82] duration metric: took 4m0.014058258s for pod "metrics-server-9975d5f86-v2pwf" in "kube-system" namespace to be "Ready" ...
	E1026 01:37:48.981744 2073170 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1026 01:37:48.981801 2073170 pod_ready.go:39] duration metric: took 5m20.850945581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1026 01:37:48.981824 2073170 api_server.go:52] waiting for apiserver process to appear ...
	I1026 01:37:48.981925 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1026 01:37:48.982046 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 01:37:49.061661 2073170 cri.go:89] found id: "caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
	I1026 01:37:49.061738 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
	I1026 01:37:49.061758 2073170 cri.go:89] found id: ""
	I1026 01:37:49.061783 2073170 logs.go:282] 2 containers: [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d]
	I1026 01:37:49.061874 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.066064 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.070465 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1026 01:37:49.070527 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 01:37:49.152162 2073170 cri.go:89] found id: "3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
	I1026 01:37:49.152183 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
	I1026 01:37:49.152189 2073170 cri.go:89] found id: ""
	I1026 01:37:49.152196 2073170 logs.go:282] 2 containers: [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272]
	I1026 01:37:49.152250 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.157843 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.161728 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1026 01:37:49.161874 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 01:37:49.213678 2073170 cri.go:89] found id: "c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
	I1026 01:37:49.213756 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
	I1026 01:37:49.213776 2073170 cri.go:89] found id: ""
	I1026 01:37:49.213800 2073170 logs.go:282] 2 containers: [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e]
	I1026 01:37:49.213885 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.220177 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.232203 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1026 01:37:49.232345 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 01:37:49.294557 2073170 cri.go:89] found id: "9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
	I1026 01:37:49.294645 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
	I1026 01:37:49.294665 2073170 cri.go:89] found id: ""
	I1026 01:37:49.294689 2073170 logs.go:282] 2 containers: [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7]
	I1026 01:37:49.294782 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.299146 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.303215 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1026 01:37:49.303357 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 01:37:49.350569 2073170 cri.go:89] found id: "f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
	I1026 01:37:49.350646 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
	I1026 01:37:49.350668 2073170 cri.go:89] found id: ""
	I1026 01:37:49.350691 2073170 logs.go:282] 2 containers: [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670]
	I1026 01:37:49.350780 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.356495 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.360987 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 01:37:49.361095 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 01:37:49.416682 2073170 cri.go:89] found id: "407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
	I1026 01:37:49.416758 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
	I1026 01:37:49.416778 2073170 cri.go:89] found id: ""
	I1026 01:37:49.416800 2073170 logs.go:282] 2 containers: [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526]
	I1026 01:37:49.416889 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.421667 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.425830 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1026 01:37:49.425971 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 01:37:49.476562 2073170 cri.go:89] found id: "19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
	I1026 01:37:49.476639 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
	I1026 01:37:49.476670 2073170 cri.go:89] found id: ""
	I1026 01:37:49.476691 2073170 logs.go:282] 2 containers: [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b]
	I1026 01:37:49.476777 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.481392 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.485639 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1026 01:37:49.485779 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 01:37:49.536284 2073170 cri.go:89] found id: "f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
	I1026 01:37:49.536306 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
	I1026 01:37:49.536312 2073170 cri.go:89] found id: ""
	I1026 01:37:49.536320 2073170 logs.go:282] 2 containers: [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad]
	I1026 01:37:49.536379 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.540772 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.545367 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 01:37:49.545440 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 01:37:49.595865 2073170 cri.go:89] found id: "ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
	I1026 01:37:49.595886 2073170 cri.go:89] found id: ""
	I1026 01:37:49.595894 2073170 logs.go:282] 1 containers: [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125]
	I1026 01:37:49.595953 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:37:49.606230 2073170 logs.go:123] Gathering logs for coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] ...
	I1026 01:37:49.606256 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
	I1026 01:37:49.660000 2073170 logs.go:123] Gathering logs for kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] ...
	I1026 01:37:49.660082 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
	I1026 01:37:49.717276 2073170 logs.go:123] Gathering logs for kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] ...
	I1026 01:37:49.717309 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
	I1026 01:37:49.815045 2073170 logs.go:123] Gathering logs for kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] ...
	I1026 01:37:49.815084 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
	I1026 01:37:49.932109 2073170 logs.go:123] Gathering logs for kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] ...
	I1026 01:37:49.932149 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
	I1026 01:37:50.002376 2073170 logs.go:123] Gathering logs for kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] ...
	I1026 01:37:50.002417 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
	I1026 01:37:50.059980 2073170 logs.go:123] Gathering logs for kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] ...
	I1026 01:37:50.060057 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
	I1026 01:37:50.142243 2073170 logs.go:123] Gathering logs for container status ...
	I1026 01:37:50.142278 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 01:37:50.273887 2073170 logs.go:123] Gathering logs for kubelet ...
	I1026 01:37:50.273926 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1026 01:37:50.400116 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142066     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-44wvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-44wvw" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.400368 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142157     658 reflector.go:138] object-"kube-system"/"metrics-server-token-7tsjh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7tsjh" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.400590 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142205     658 reflector.go:138] object-"kube-system"/"coredns-token-n94ql": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n94ql" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.400798 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142249     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.401019 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142293     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-47vp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-47vp6" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.401237 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142333     658 reflector.go:138] object-"kube-system"/"kindnet-token-qqrpm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qqrpm" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.401445 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142465     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.401657 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142549     658 reflector.go:138] object-"default"/"default-token-2jcx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2jcx9" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:37:50.409687 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.113479     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.411310 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.907637     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.414173 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:44 old-k8s-version-368787 kubelet[658]: E1026 01:32:44.742023     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.416401 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:56 old-k8s-version-368787 kubelet[658]: E1026 01:32:56.075801     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.416745 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.080022     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.416936 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.735904     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.417608 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:01 old-k8s-version-368787 kubelet[658]: E1026 01:33:01.507025     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.420461 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:11 old-k8s-version-368787 kubelet[658]: E1026 01:33:11.743711     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.421064 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:17 old-k8s-version-368787 kubelet[658]: E1026 01:33:17.172767     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.421398 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:21 old-k8s-version-368787 kubelet[658]: E1026 01:33:21.507449     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.421588 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:24 old-k8s-version-368787 kubelet[658]: E1026 01:33:24.731672     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.421924 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:32 old-k8s-version-368787 kubelet[658]: E1026 01:33:32.731878     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.422114 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:37 old-k8s-version-368787 kubelet[658]: E1026 01:33:37.731969     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.422719 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:46 old-k8s-version-368787 kubelet[658]: E1026 01:33:46.262782     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.422908 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:48 old-k8s-version-368787 kubelet[658]: E1026 01:33:48.732324     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.423246 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:51 old-k8s-version-368787 kubelet[658]: E1026 01:33:51.507083     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.425832 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:03 old-k8s-version-368787 kubelet[658]: E1026 01:34:03.750208     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.426182 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:06 old-k8s-version-368787 kubelet[658]: E1026 01:34:06.731790     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.426374 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:18 old-k8s-version-368787 kubelet[658]: E1026 01:34:18.732360     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.426713 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:21 old-k8s-version-368787 kubelet[658]: E1026 01:34:21.731670     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.426907 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:33 old-k8s-version-368787 kubelet[658]: E1026 01:34:33.732041     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.427535 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:37 old-k8s-version-368787 kubelet[658]: E1026 01:34:37.414157     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.427870 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:41 old-k8s-version-368787 kubelet[658]: E1026 01:34:41.507110     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.428130 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:48 old-k8s-version-368787 kubelet[658]: E1026 01:34:48.731821     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.428468 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:54 old-k8s-version-368787 kubelet[658]: E1026 01:34:54.731233     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.428662 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:59 old-k8s-version-368787 kubelet[658]: E1026 01:34:59.732434     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.428993 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:06 old-k8s-version-368787 kubelet[658]: E1026 01:35:06.731705     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.429180 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:12 old-k8s-version-368787 kubelet[658]: E1026 01:35:12.731827     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.429561 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:19 old-k8s-version-368787 kubelet[658]: E1026 01:35:19.732195     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.432106 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:25 old-k8s-version-368787 kubelet[658]: E1026 01:35:25.742123     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:37:50.432445 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:33 old-k8s-version-368787 kubelet[658]: E1026 01:35:33.731192     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.432634 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:40 old-k8s-version-368787 kubelet[658]: E1026 01:35:40.736836     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.432982 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:48 old-k8s-version-368787 kubelet[658]: E1026 01:35:48.731218     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.433171 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:53 old-k8s-version-368787 kubelet[658]: E1026 01:35:53.733617     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.433771 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:03 old-k8s-version-368787 kubelet[658]: E1026 01:36:03.662574     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.433959 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:06 old-k8s-version-368787 kubelet[658]: E1026 01:36:06.731650     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.434293 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:11 old-k8s-version-368787 kubelet[658]: E1026 01:36:11.507131     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.434525 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:21 old-k8s-version-368787 kubelet[658]: E1026 01:36:21.731783     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.434861 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:23 old-k8s-version-368787 kubelet[658]: E1026 01:36:23.731690     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.435204 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:34 old-k8s-version-368787 kubelet[658]: E1026 01:36:34.731309     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.435398 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.435735 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.435924 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.436258 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.436447 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.436780 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.436968 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.437304 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.437492 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:50.437824 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:50.438014 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1026 01:37:50.438025 2073170 logs.go:123] Gathering logs for dmesg ...
	I1026 01:37:50.438040 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 01:37:50.454757 2073170 logs.go:123] Gathering logs for describe nodes ...
	I1026 01:37:50.454785 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 01:37:50.669583 2073170 logs.go:123] Gathering logs for containerd ...
	I1026 01:37:50.669862 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1026 01:37:50.736640 2073170 logs.go:123] Gathering logs for coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] ...
	I1026 01:37:50.736718 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
	I1026 01:37:50.791237 2073170 logs.go:123] Gathering logs for kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] ...
	I1026 01:37:50.791266 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
	I1026 01:37:50.860038 2073170 logs.go:123] Gathering logs for storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] ...
	I1026 01:37:50.860076 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
	I1026 01:37:50.936359 2073170 logs.go:123] Gathering logs for kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] ...
	I1026 01:37:50.936407 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
	I1026 01:37:51.078999 2073170 logs.go:123] Gathering logs for kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] ...
	I1026 01:37:51.079039 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
	I1026 01:37:51.197002 2073170 logs.go:123] Gathering logs for storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] ...
	I1026 01:37:51.197040 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
	I1026 01:37:51.270252 2073170 logs.go:123] Gathering logs for kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] ...
	I1026 01:37:51.270281 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
	I1026 01:37:51.351708 2073170 logs.go:123] Gathering logs for kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] ...
	I1026 01:37:51.351739 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
	I1026 01:37:51.428214 2073170 logs.go:123] Gathering logs for etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] ...
	I1026 01:37:51.428289 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
	I1026 01:37:51.480860 2073170 logs.go:123] Gathering logs for etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] ...
	I1026 01:37:51.480949 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
	I1026 01:37:51.533094 2073170 out.go:358] Setting ErrFile to fd 2...
	I1026 01:37:51.533165 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1026 01:37:51.533239 2073170 out.go:270] X Problems detected in kubelet:
	W1026 01:37:51.533278 2073170 out.go:270]   Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:51.533314 2073170 out.go:270]   Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:51.533366 2073170 out.go:270]   Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:37:51.533403 2073170 out.go:270]   Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:37:51.533452 2073170 out.go:270]   Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1026 01:37:51.533488 2073170 out.go:358] Setting ErrFile to fd 2...
	I1026 01:37:51.533508 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
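The "Problems detected in kubelet" block above boils down to two recurring failures: the metrics-server pod cannot pull fake.domain/registry.k8s.io/echoserver:1.4 (fake.domain never resolves, as the ErrImagePull entries further down show), and dashboard-metrics-scraper is stuck in CrashLoopBackOff. A hedged sketch of inspecting both pods from a kubectl client; the k8s-app label selectors are assumptions based on the stock minikube addon manifests, not values taken from this log:

    # Surface the two failing pods named in the kubelet warnings above.
    kubectl get pods -A -o wide | grep -E 'metrics-server|dashboard-metrics-scraper'
    # Pod events carry the ImagePullBackOff / CrashLoopBackOff details.
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    kubectl -n kubernetes-dashboard describe pod -l k8s-app=dashboard-metrics-scraper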
	I1026 01:37:52.514893 2083289 out.go:235]   - Generating certificates and keys ...
	I1026 01:37:52.515003 2083289 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1026 01:37:52.515101 2083289 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1026 01:37:53.126048 2083289 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 01:37:53.592328 2083289 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1026 01:37:53.973998 2083289 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1026 01:37:54.863814 2083289 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1026 01:37:55.080365 2083289 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1026 01:37:55.080975 2083289 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-892584 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 01:37:55.444224 2083289 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1026 01:37:55.444527 2083289 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-892584 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 01:37:55.877422 2083289 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 01:37:56.552980 2083289 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 01:37:57.272878 2083289 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1026 01:37:57.273190 2083289 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 01:37:57.941688 2083289 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 01:37:58.397786 2083289 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 01:37:58.657250 2083289 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 01:37:59.135480 2083289 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 01:37:59.572599 2083289 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 01:37:59.573142 2083289 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 01:37:59.576141 2083289 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
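The pid 2083289 lines above appear to come from a second minikube process driving kubeadm for the embed-certs-892584 profile, working through its [certs] and [kubeconfig] phases. For orientation, a hedged sketch of the equivalent standalone kubeadm phase invocations; the --config path is purely illustrative and not taken from this run:

    # Roughly the phases reported above, invoked one at a time against a kubeadm config file.
    sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml

The SANs reported for etcd/server and etcd/peer above (embed-certs-892584, localhost, 192.168.85.2, 127.0.0.1, ::1) can be confirmed afterwards with openssl x509 -noout -text against the generated certificates.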
	I1026 01:38:01.535291 2073170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:38:01.547514 2073170 api_server.go:72] duration metric: took 5m49.774798849s to wait for apiserver process to appear ...
	I1026 01:38:01.547541 2073170 api_server.go:88] waiting for apiserver healthz status ...
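The two lines above close out a roughly 5m49s wait for the kube-apiserver process before minikube switches to healthz polling. The pgrep check can be rerun on the node as-is; quoting the pattern (an addition here) keeps the shell from globbing it:

    # Same process check minikube just ran; prints the newest matching PID when the apiserver is up.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'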
	I1026 01:38:01.547576 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1026 01:38:01.547632 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 01:38:01.587732 2073170 cri.go:89] found id: "caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
	I1026 01:38:01.587754 2073170 cri.go:89] found id: "ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
	I1026 01:38:01.587759 2073170 cri.go:89] found id: ""
	I1026 01:38:01.587766 2073170 logs.go:282] 2 containers: [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d]
	I1026 01:38:01.587828 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.592229 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.595984 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1026 01:38:01.596068 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 01:38:01.639841 2073170 cri.go:89] found id: "3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
	I1026 01:38:01.639871 2073170 cri.go:89] found id: "19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
	I1026 01:38:01.639876 2073170 cri.go:89] found id: ""
	I1026 01:38:01.639884 2073170 logs.go:282] 2 containers: [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272]
	I1026 01:38:01.639994 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.644607 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.648285 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1026 01:38:01.648362 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 01:38:01.720748 2073170 cri.go:89] found id: "c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
	I1026 01:38:01.720774 2073170 cri.go:89] found id: "3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
	I1026 01:38:01.720780 2073170 cri.go:89] found id: ""
	I1026 01:38:01.720787 2073170 logs.go:282] 2 containers: [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e]
	I1026 01:38:01.720846 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.726066 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.732857 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1026 01:38:01.732992 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 01:38:01.814967 2073170 cri.go:89] found id: "9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
	I1026 01:38:01.814997 2073170 cri.go:89] found id: "4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
	I1026 01:38:01.815005 2073170 cri.go:89] found id: ""
	I1026 01:38:01.815012 2073170 logs.go:282] 2 containers: [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7]
	I1026 01:38:01.815203 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.819665 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.826464 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1026 01:38:01.826610 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 01:38:01.897678 2073170 cri.go:89] found id: "f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
	I1026 01:38:01.897708 2073170 cri.go:89] found id: "79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
	I1026 01:38:01.897714 2073170 cri.go:89] found id: ""
	I1026 01:38:01.897727 2073170 logs.go:282] 2 containers: [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670]
	I1026 01:38:01.897878 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.922934 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:01.928999 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 01:38:01.929123 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 01:38:02.046457 2073170 cri.go:89] found id: "407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
	I1026 01:38:02.046487 2073170 cri.go:89] found id: "5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
	I1026 01:38:02.046498 2073170 cri.go:89] found id: ""
	I1026 01:38:02.046512 2073170 logs.go:282] 2 containers: [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526]
	I1026 01:38:02.046624 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.067786 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.076203 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1026 01:38:02.076352 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 01:38:02.150567 2073170 cri.go:89] found id: "19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
	I1026 01:38:02.150612 2073170 cri.go:89] found id: "720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
	I1026 01:38:02.150617 2073170 cri.go:89] found id: ""
	I1026 01:38:02.150673 2073170 logs.go:282] 2 containers: [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b]
	I1026 01:38:02.150774 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.156731 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.163096 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1026 01:38:02.163254 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1026 01:38:02.248045 2073170 cri.go:89] found id: "ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
	I1026 01:38:02.248072 2073170 cri.go:89] found id: ""
	I1026 01:38:02.248081 2073170 logs.go:282] 1 containers: [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125]
	I1026 01:38:02.248231 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.258094 2073170 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1026 01:38:02.258253 2073170 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 01:38:02.359394 2073170 cri.go:89] found id: "f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
	I1026 01:38:02.359428 2073170 cri.go:89] found id: "3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
	I1026 01:38:02.359433 2073170 cri.go:89] found id: ""
	I1026 01:38:02.359441 2073170 logs.go:282] 2 containers: [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad]
	I1026 01:38:02.359696 2073170 ssh_runner.go:195] Run: which crictl
	I1026 01:38:02.368425 2073170 ssh_runner.go:195] Run: which crictl
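Between 01:38:01 and 01:38:02 the same pattern repeats once per control-plane component: locate crictl with which, then list matching containers with crictl ps -a --quiet --name=<component>. A compact sketch of that discovery loop, relying on crictl's --name flag being a regex over container names:

    # One pass over the components minikube inspects individually above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
      echo "--- ${name} ---"
      sudo crictl ps -a --quiet --name="${name}"
    done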
	I1026 01:38:02.375386 2073170 logs.go:123] Gathering logs for storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] ...
	I1026 01:38:02.375416 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad"
	I1026 01:38:02.483267 2073170 logs.go:123] Gathering logs for dmesg ...
	I1026 01:38:02.483431 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 01:38:02.539716 2073170 logs.go:123] Gathering logs for kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] ...
	I1026 01:38:02.539755 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9"
	I1026 01:38:02.733373 2073170 logs.go:123] Gathering logs for kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] ...
	I1026 01:38:02.733425 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52"
	I1026 01:38:02.854359 2073170 logs.go:123] Gathering logs for kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] ...
	I1026 01:38:02.854394 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7"
	I1026 01:38:02.955435 2073170 logs.go:123] Gathering logs for kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] ...
	I1026 01:38:02.955469 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670"
	I1026 01:38:03.040330 2073170 logs.go:123] Gathering logs for kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] ...
	I1026 01:38:03.040364 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526"
	I1026 01:38:03.184875 2073170 logs.go:123] Gathering logs for container status ...
	I1026 01:38:03.184928 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
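The container-status command above is a shell fallback: use crictl when it is on PATH, otherwise fall back to docker ps -a. Expanded into an if/else for readability; this is an equivalent-in-spirit rewrite, not the exact one-liner minikube runs:

    # Same idea as the `which crictl || echo crictl` one-liner above.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi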
	I1026 01:38:03.308598 2073170 logs.go:123] Gathering logs for kubelet ...
	I1026 01:38:03.308637 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1026 01:38:03.395084 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142066     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-44wvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-44wvw" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.395487 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142157     658 reflector.go:138] object-"kube-system"/"metrics-server-token-7tsjh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-7tsjh" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.395746 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142205     658 reflector.go:138] object-"kube-system"/"coredns-token-n94ql": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n94ql" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.395995 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142249     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.396249 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142293     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-47vp6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-47vp6" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.396495 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142333     658 reflector.go:138] object-"kube-system"/"kindnet-token-qqrpm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-qqrpm" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.396759 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142465     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.397012 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:28 old-k8s-version-368787 kubelet[658]: E1026 01:32:28.142549     658 reflector.go:138] object-"default"/"default-token-2jcx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2jcx9" is forbidden: User "system:node:old-k8s-version-368787" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-368787' and this object
	W1026 01:38:03.405224 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.113479     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.406935 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:30 old-k8s-version-368787 kubelet[658]: E1026 01:32:30.907637     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.410090 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:44 old-k8s-version-368787 kubelet[658]: E1026 01:32:44.742023     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.412301 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:56 old-k8s-version-368787 kubelet[658]: E1026 01:32:56.075801     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.412690 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.080022     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.412911 2073170 logs.go:138] Found kubelet problem: Oct 26 01:32:57 old-k8s-version-368787 kubelet[658]: E1026 01:32:57.735904     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.413709 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:01 old-k8s-version-368787 kubelet[658]: E1026 01:33:01.507025     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.416720 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:11 old-k8s-version-368787 kubelet[658]: E1026 01:33:11.743711     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.417382 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:17 old-k8s-version-368787 kubelet[658]: E1026 01:33:17.172767     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.417786 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:21 old-k8s-version-368787 kubelet[658]: E1026 01:33:21.507449     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.418027 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:24 old-k8s-version-368787 kubelet[658]: E1026 01:33:24.731672     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.418426 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:32 old-k8s-version-368787 kubelet[658]: E1026 01:33:32.731878     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.418683 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:37 old-k8s-version-368787 kubelet[658]: E1026 01:33:37.731969     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.419380 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:46 old-k8s-version-368787 kubelet[658]: E1026 01:33:46.262782     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.419604 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:48 old-k8s-version-368787 kubelet[658]: E1026 01:33:48.732324     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.419986 2073170 logs.go:138] Found kubelet problem: Oct 26 01:33:51 old-k8s-version-368787 kubelet[658]: E1026 01:33:51.507083     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.422699 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:03 old-k8s-version-368787 kubelet[658]: E1026 01:34:03.750208     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.423152 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:06 old-k8s-version-368787 kubelet[658]: E1026 01:34:06.731790     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.423380 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:18 old-k8s-version-368787 kubelet[658]: E1026 01:34:18.732360     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.423781 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:21 old-k8s-version-368787 kubelet[658]: E1026 01:34:21.731670     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.423994 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:33 old-k8s-version-368787 kubelet[658]: E1026 01:34:33.732041     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.424632 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:37 old-k8s-version-368787 kubelet[658]: E1026 01:34:37.414157     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.425085 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:41 old-k8s-version-368787 kubelet[658]: E1026 01:34:41.507110     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.425345 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:48 old-k8s-version-368787 kubelet[658]: E1026 01:34:48.731821     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.425722 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:54 old-k8s-version-368787 kubelet[658]: E1026 01:34:54.731233     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.425954 2073170 logs.go:138] Found kubelet problem: Oct 26 01:34:59 old-k8s-version-368787 kubelet[658]: E1026 01:34:59.732434     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.426317 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:06 old-k8s-version-368787 kubelet[658]: E1026 01:35:06.731705     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.426524 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:12 old-k8s-version-368787 kubelet[658]: E1026 01:35:12.731827     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.426905 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:19 old-k8s-version-368787 kubelet[658]: E1026 01:35:19.732195     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.429672 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:25 old-k8s-version-368787 kubelet[658]: E1026 01:35:25.742123     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1026 01:38:03.430063 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:33 old-k8s-version-368787 kubelet[658]: E1026 01:35:33.731192     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.430283 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:40 old-k8s-version-368787 kubelet[658]: E1026 01:35:40.736836     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.430667 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:48 old-k8s-version-368787 kubelet[658]: E1026 01:35:48.731218     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.430891 2073170 logs.go:138] Found kubelet problem: Oct 26 01:35:53 old-k8s-version-368787 kubelet[658]: E1026 01:35:53.733617     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.431531 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:03 old-k8s-version-368787 kubelet[658]: E1026 01:36:03.662574     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.431751 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:06 old-k8s-version-368787 kubelet[658]: E1026 01:36:06.731650     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.432125 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:11 old-k8s-version-368787 kubelet[658]: E1026 01:36:11.507131     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.432342 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:21 old-k8s-version-368787 kubelet[658]: E1026 01:36:21.731783     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.432691 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:23 old-k8s-version-368787 kubelet[658]: E1026 01:36:23.731690     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.433042 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:34 old-k8s-version-368787 kubelet[658]: E1026 01:36:34.731309     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.433355 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.433731 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.433935 2073170 logs.go:138] Found kubelet problem: Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.434295 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.434516 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.434912 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.435166 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.435545 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.435770 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.436139 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.436351 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:03.436716 2073170 logs.go:138] Found kubelet problem: Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:03.436953 2073170 logs.go:138] Found kubelet problem: Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
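Every "Found kubelet problem" warning above was extracted from the journalctl command issued just before them. The same filtered view can be reproduced directly on the node:

    # Pull only the pull/crash failures out of the last 400 kubelet journal lines.
    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -E 'ErrImagePull|ImagePullBackOff|CrashLoopBackOff'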
	I1026 01:38:03.436967 2073170 logs.go:123] Gathering logs for etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] ...
	I1026 01:38:03.436992 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace"
	I1026 01:38:03.527806 2073170 logs.go:123] Gathering logs for etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] ...
	I1026 01:38:03.527843 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272"
	I1026 01:37:59.578612 2083289 out.go:235]   - Booting up control plane ...
	I1026 01:37:59.578714 2083289 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 01:37:59.578795 2083289 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 01:37:59.579360 2083289 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 01:37:59.591006 2083289 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 01:37:59.597806 2083289 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 01:37:59.598119 2083289 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1026 01:37:59.707933 2083289 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 01:37:59.708054 2083289 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 01:38:01.211210 2083289 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.508303346s
	I1026 01:38:01.211297 2083289 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
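The [kubelet-check] and [api-check] lines above are kubeadm polling two health endpoints on the embed-certs-892584 node. A hedged sketch of checking them by hand from that node: the kubelet port 10248 comes from the log itself, while 8443 as the apiserver port is an assumption (minikube's customary choice) rather than something this log states:

    # Kubelet healthz, the endpoint named in the [kubelet-check] line above.
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy
    # API server healthz; port 8443 is assumed, -k skips certificate verification for a quick check.
    curl -skf https://127.0.0.1:8443/healthz && echo apiserver healthy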
	I1026 01:38:03.598581 2073170 logs.go:123] Gathering logs for kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] ...
	I1026 01:38:03.598756 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b"
	I1026 01:38:03.677581 2073170 logs.go:123] Gathering logs for containerd ...
	I1026 01:38:03.677658 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1026 01:38:03.753106 2073170 logs.go:123] Gathering logs for describe nodes ...
	I1026 01:38:03.753195 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 01:38:03.997226 2073170 logs.go:123] Gathering logs for coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] ...
	I1026 01:38:03.997300 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e"
	I1026 01:38:04.087455 2073170 logs.go:123] Gathering logs for kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] ...
	I1026 01:38:04.087550 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044"
	I1026 01:38:04.175664 2073170 logs.go:123] Gathering logs for kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] ...
	I1026 01:38:04.175745 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a"
	I1026 01:38:04.270341 2073170 logs.go:123] Gathering logs for kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] ...
	I1026 01:38:04.270371 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125"
	I1026 01:38:04.370143 2073170 logs.go:123] Gathering logs for storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] ...
	I1026 01:38:04.370175 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb"
	I1026 01:38:04.447078 2073170 logs.go:123] Gathering logs for kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] ...
	I1026 01:38:04.447109 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d"
	I1026 01:38:04.545939 2073170 logs.go:123] Gathering logs for coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] ...
	I1026 01:38:04.545976 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7"
	I1026 01:38:04.715996 2073170 logs.go:123] Gathering logs for kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] ...
	I1026 01:38:04.716021 2073170 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab"
	I1026 01:38:04.880261 2073170 out.go:358] Setting ErrFile to fd 2...
	I1026 01:38:04.880333 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1026 01:38:04.880402 2073170 out.go:270] X Problems detected in kubelet:
	W1026 01:38:04.880449 2073170 out.go:270]   Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:04.880486 2073170 out.go:270]   Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:04.880529 2073170 out.go:270]   Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1026 01:38:04.880562 2073170 out.go:270]   Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	W1026 01:38:04.880596 2073170 out.go:270]   Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1026 01:38:04.880641 2073170 out.go:358] Setting ErrFile to fd 2...
	I1026 01:38:04.880663 2073170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:38:09.713285 2083289 kubeadm.go:310] [api-check] The API server is healthy after 8.501988454s
	I1026 01:38:09.742782 2083289 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 01:38:09.764250 2083289 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 01:38:09.802945 2083289 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 01:38:09.803158 2083289 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-892584 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 01:38:09.825295 2083289 kubeadm.go:310] [bootstrap-token] Using token: u6tbb6.u2rwpec4etemhweo
	I1026 01:38:09.827578 2083289 out.go:235]   - Configuring RBAC rules ...
	I1026 01:38:09.827753 2083289 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 01:38:09.837608 2083289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 01:38:09.850935 2083289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 01:38:09.855277 2083289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 01:38:09.861058 2083289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 01:38:09.868001 2083289 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 01:38:10.125197 2083289 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 01:38:10.548060 2083289 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1026 01:38:11.121677 2083289 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1026 01:38:11.124611 2083289 kubeadm.go:310] 
	I1026 01:38:11.124704 2083289 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1026 01:38:11.124720 2083289 kubeadm.go:310] 
	I1026 01:38:11.124803 2083289 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1026 01:38:11.124813 2083289 kubeadm.go:310] 
	I1026 01:38:11.124843 2083289 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1026 01:38:11.127909 2083289 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 01:38:11.128007 2083289 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 01:38:11.128042 2083289 kubeadm.go:310] 
	I1026 01:38:11.128112 2083289 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1026 01:38:11.128122 2083289 kubeadm.go:310] 
	I1026 01:38:11.128179 2083289 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 01:38:11.128187 2083289 kubeadm.go:310] 
	I1026 01:38:11.128253 2083289 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1026 01:38:11.128371 2083289 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 01:38:11.128472 2083289 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 01:38:11.128484 2083289 kubeadm.go:310] 
	I1026 01:38:11.128580 2083289 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 01:38:11.128678 2083289 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1026 01:38:11.128686 2083289 kubeadm.go:310] 
	I1026 01:38:11.128794 2083289 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token u6tbb6.u2rwpec4etemhweo \
	I1026 01:38:11.128937 2083289 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:312d9d71d8954a92713e020be0abaacd15647d9767bbc020c5ae409bd78f03a2 \
	I1026 01:38:11.128981 2083289 kubeadm.go:310] 	--control-plane 
	I1026 01:38:11.128993 2083289 kubeadm.go:310] 
	I1026 01:38:11.129084 2083289 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1026 01:38:11.129094 2083289 kubeadm.go:310] 
	I1026 01:38:11.129186 2083289 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u6tbb6.u2rwpec4etemhweo \
	I1026 01:38:11.129305 2083289 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:312d9d71d8954a92713e020be0abaacd15647d9767bbc020c5ae409bd78f03a2 
	I1026 01:38:11.135282 2083289 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-aws\n", err: exit status 1
	I1026 01:38:11.135454 2083289 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 01:38:11.135486 2083289 cni.go:84] Creating CNI manager for ""
	I1026 01:38:11.135497 2083289 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1026 01:38:11.138849 2083289 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1026 01:38:11.140970 2083289 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 01:38:11.145098 2083289 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1026 01:38:11.145119 2083289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 01:38:11.164644 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 01:38:11.530096 2083289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 01:38:11.530176 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:38:11.530232 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-892584 minikube.k8s.io/updated_at=2024_10_26T01_38_11_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=embed-certs-892584 minikube.k8s.io/primary=true
	I1026 01:38:11.751741 2083289 ops.go:34] apiserver oom_adj: -16
	I1026 01:38:11.751866 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:38:12.251949 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:38:12.751990 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:38:13.251995 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:38:13.751942 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:38:14.252702 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:38:14.752004 2083289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 01:38:14.846755 2083289 kubeadm.go:1113] duration metric: took 3.316651442s to wait for elevateKubeSystemPrivileges
	I1026 01:38:14.846789 2083289 kubeadm.go:394] duration metric: took 22.657949564s to StartCluster
	I1026 01:38:14.846808 2083289 settings.go:142] acquiring lock: {Name:mk5238870f54ce90633b3ed0ddcc81fb678d064e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:38:14.846873 2083289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 01:38:14.848338 2083289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19868-1857747/kubeconfig: {Name:mk1a434cd0cc84bfd2a4a232bfd16b0239e78299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 01:38:14.848565 2083289 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1026 01:38:14.848667 2083289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 01:38:14.848910 2083289 config.go:182] Loaded profile config "embed-certs-892584": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1026 01:38:14.848950 2083289 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 01:38:14.849036 2083289 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-892584"
	I1026 01:38:14.849083 2083289 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-892584"
	I1026 01:38:14.849111 2083289 host.go:66] Checking if "embed-certs-892584" exists ...
	I1026 01:38:14.849083 2083289 addons.go:69] Setting default-storageclass=true in profile "embed-certs-892584"
	I1026 01:38:14.849187 2083289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-892584"
	I1026 01:38:14.849498 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
	I1026 01:38:14.849563 2083289 cli_runner.go:164] Run: docker container inspect embed-certs-892584 --format={{.State.Status}}
	I1026 01:38:14.852124 2083289 out.go:177] * Verifying Kubernetes components...
	I1026 01:38:14.854196 2083289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 01:38:14.881298 2073170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 01:38:14.898252 2073170 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 01:38:14.900214 2083289 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 01:38:14.902356 2073170 out.go:201] 
	W1026 01:38:14.905153 2073170 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1026 01:38:14.905189 2073170 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1026 01:38:14.905207 2073170 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1026 01:38:14.905214 2073170 out.go:270] * 
	W1026 01:38:14.906019 2073170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 01:38:14.907947 2073170 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	6cd756151e892       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   8bcf177240c81       dashboard-metrics-scraper-8d5bb5db8-w4mwk
	ed8fe83be8b1e       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   30b684a6011b9       kubernetes-dashboard-cd95d586-zbljx
	19f64e2c8ba4c       0bcd66b03df5f       5 minutes ago       Running             kindnet-cni                 1                   a6d65de0c3d26       kindnet-5vwks
	f4444a86e1f19       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         1                   b3190ae24fe90       storage-provisioner
	c8ce92c2bee0e       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   6c70d2eb24895       coredns-74ff55c5b-q7ksx
	9bd96eb6d5a7e       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   3622b58680707       busybox
	f8701160de76e       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   64c136539e749       kube-proxy-9q264
	9e91002c8dfb9       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   5dae65cc1dd59       kube-scheduler-old-k8s-version-368787
	407cc3b1c2340       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   7c1ec6ea72a39       kube-controller-manager-old-k8s-version-368787
	3e88cb5ec2163       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   d761b03d84898       etcd-old-k8s-version-368787
	caf4499d19d56       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   a43731704d1a0       kube-apiserver-old-k8s-version-368787
	c04d640227914       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   066dab64f949f       busybox
	3f79400ea7617       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   920fdb26a0937       coredns-74ff55c5b-q7ksx
	3765e18684825       ba04bb24b9575       8 minutes ago       Exited              storage-provisioner         0                   59f0dfedddaa9       storage-provisioner
	720cfd17791b3       0bcd66b03df5f       8 minutes ago       Exited              kindnet-cni                 0                   10224234d2ce3       kindnet-5vwks
	79f5f9136e040       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   83ca250b827cc       kube-proxy-9q264
	4cf9033bc9607       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   799d89ab603b2       kube-scheduler-old-k8s-version-368787
	ee5aa1f2e06d3       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   e72a21e82106b       kube-apiserver-old-k8s-version-368787
	5605b568cc91e       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   84b13c66fd5b5       kube-controller-manager-old-k8s-version-368787
	19176bbdf5c5a       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   976f3bb9124e3       etcd-old-k8s-version-368787
	
	
	==> containerd <==
	Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.764656284Z" level=info msg="CreateContainer within sandbox \"8bcf177240c81bb947783da60a5fcf55865cf9d6adc6d03ea99e7730ff526a55\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\""
	Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.766625844Z" level=info msg="StartContainer for \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\""
	Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.855901951Z" level=info msg="StartContainer for \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\" returns successfully"
	Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.889862492Z" level=info msg="shim disconnected" id=6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0 namespace=k8s.io
	Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.890498118Z" level=warning msg="cleaning up after shim disconnected" id=6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0 namespace=k8s.io
	Oct 26 01:34:36 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:36.890733330Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 26 01:34:37 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:37.420056486Z" level=info msg="RemoveContainer for \"00721c6a03267f4a57534c88faf6e9e2b4f542cf2c27f2cb95035072fe5fb762\""
	Oct 26 01:34:37 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:34:37.424757166Z" level=info msg="RemoveContainer for \"00721c6a03267f4a57534c88faf6e9e2b4f542cf2c27f2cb95035072fe5fb762\" returns successfully"
	Oct 26 01:35:25 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:35:25.732326882Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 01:35:25 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:35:25.738884136Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 26 01:35:25 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:35:25.741048086Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 26 01:35:25 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:35:25.741115402Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.733707994Z" level=info msg="CreateContainer within sandbox \"8bcf177240c81bb947783da60a5fcf55865cf9d6adc6d03ea99e7730ff526a55\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.748868601Z" level=info msg="CreateContainer within sandbox \"8bcf177240c81bb947783da60a5fcf55865cf9d6adc6d03ea99e7730ff526a55\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01\""
	Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.749633017Z" level=info msg="StartContainer for \"6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01\""
	Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.827188140Z" level=info msg="StartContainer for \"6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01\" returns successfully"
	Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.854002429Z" level=info msg="shim disconnected" id=6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01 namespace=k8s.io
	Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.854218637Z" level=warning msg="cleaning up after shim disconnected" id=6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01 namespace=k8s.io
	Oct 26 01:36:02 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:02.854241513Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 26 01:36:03 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:03.664176345Z" level=info msg="RemoveContainer for \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\""
	Oct 26 01:36:03 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:36:03.669260768Z" level=info msg="RemoveContainer for \"6f166a2dbc701faad6f36d2ed4083fc16abce18f0bf0377e6f17a1a7feec8fc0\" returns successfully"
	Oct 26 01:38:13 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:38:13.732357088Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 01:38:13 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:38:13.741192394Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 26 01:38:13 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:38:13.742802745Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 26 01:38:13 old-k8s-version-368787 containerd[566]: time="2024-10-26T01:38:13.742833736Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [3f79400ea7617aee7763ba5b150b19e9d341251e73898e6d2a63c4ad076c209e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:42243 - 32852 "HINFO IN 4571955147938569355.6194879312205306998. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024066525s
	
	
	==> coredns [c8ce92c2bee0e4ca36c11aa64e264d0d783800fe7a5c3f410290301888db65a7] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:54420 - 55680 "HINFO IN 2594235961846424401.5936807825121529914. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030929455s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-368787
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-368787
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=old-k8s-version-368787
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_26T01_29_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:29:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-368787
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:38:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:33:28 +0000   Sat, 26 Oct 2024 01:29:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:33:28 +0000   Sat, 26 Oct 2024 01:29:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:33:28 +0000   Sat, 26 Oct 2024 01:29:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:33:28 +0000   Sat, 26 Oct 2024 01:29:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-368787
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7d17f95dee04c9cb986384e241b3097
	  System UUID:                5c99fbfa-38dc-440d-b323-219a37c563dc
	  Boot ID:                    efe83352-e52f-4975-85ee-d7fbf692eb79
	  Kernel Version:             5.15.0-1071-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 coredns-74ff55c5b-q7ksx                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m23s
	  kube-system                 etcd-old-k8s-version-368787                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m30s
	  kube-system                 kindnet-5vwks                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m23s
	  kube-system                 kube-apiserver-old-k8s-version-368787             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-old-k8s-version-368787    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-9q264                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-old-k8s-version-368787             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 metrics-server-9975d5f86-v2pwf                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-w4mwk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-zbljx               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m50s (x4 over 8m50s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m50s (x4 over 8m50s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m50s (x4 over 8m50s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m30s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m30s                  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s                  kubelet     Node old-k8s-version-368787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s                  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m30s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m23s                  kubelet     Node old-k8s-version-368787 status is now: NodeReady
	  Normal  Starting                 8m22s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m58s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x7 over 5m58s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-368787 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [19176bbdf5c5aec144585514f9dbfaf716de8e0fb0912af7399013b7b68b6272] <==
	raft2024/10/26 01:29:28 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/10/26 01:29:28 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/10/26 01:29:28 INFO: ea7e25599daad906 became leader at term 2
	raft2024/10/26 01:29:28 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-10-26 01:29:28.193583 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-26 01:29:28.194337 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-26 01:29:28.194399 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-26 01:29:28.194436 I | etcdserver: published {Name:old-k8s-version-368787 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-10-26 01:29:28.194452 I | embed: ready to serve client requests
	2024-10-26 01:29:28.196127 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-26 01:29:28.196275 I | embed: ready to serve client requests
	2024-10-26 01:29:28.197412 I | embed: serving client requests on 192.168.76.2:2379
	2024-10-26 01:29:55.433106 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:29:57.534493 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:30:07.534702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:30:17.534647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:30:27.534484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:30:37.534417 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:30:47.534889 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:30:57.534539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:31:07.534563 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:31:17.534863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:31:27.534462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:31:37.534440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:31:47.534733 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [3e88cb5ec2163e6c8a2d69c47e9a8e2369fa78e0674df66d908ec67ad1b18ace] <==
	2024-10-26 01:34:11.752656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:34:21.752888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:34:31.752646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:34:41.752594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:34:51.752723 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:35:01.752552 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:35:11.752632 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:35:21.752674 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:35:31.752473 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:35:41.752455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:35:51.752585 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:36:01.752743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:36:11.752418 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:36:21.752651 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:36:31.752450 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:36:41.752639 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:36:51.752593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:37:01.752528 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:37:11.752481 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:37:21.752649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:37:31.752511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:37:41.752402 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:37:51.752594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:38:01.764595 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-26 01:38:11.752985 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 01:38:17 up  9:20,  0 users,  load average: 3.23, 2.36, 2.48
	Linux old-k8s-version-368787 5.15.0-1071-aws #77~20.04.1-Ubuntu SMP Thu Oct 3 19:34:36 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [19f64e2c8ba4c2a239a69351b865d51f687e0d819d4f1cfebd5c199c2d56a48a] <==
	I1026 01:36:12.996223       1 main.go:300] handling current node
	I1026 01:36:23.007222       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:36:23.007270       1 main.go:300] handling current node
	I1026 01:36:32.996960       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:36:32.996995       1 main.go:300] handling current node
	I1026 01:36:43.004950       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:36:43.004996       1 main.go:300] handling current node
	I1026 01:36:53.008268       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:36:53.008557       1 main.go:300] handling current node
	I1026 01:37:02.996690       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:37:02.996730       1 main.go:300] handling current node
	I1026 01:37:13.003923       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:37:13.004029       1 main.go:300] handling current node
	I1026 01:37:23.007266       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:37:23.007306       1 main.go:300] handling current node
	I1026 01:37:32.996175       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:37:32.996399       1 main.go:300] handling current node
	I1026 01:37:43.008598       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:37:43.008641       1 main.go:300] handling current node
	I1026 01:37:53.005212       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:37:53.005265       1 main.go:300] handling current node
	I1026 01:38:03.004213       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:38:03.004266       1 main.go:300] handling current node
	I1026 01:38:13.009533       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:38:13.009571       1 main.go:300] handling current node
	
	
	==> kindnet [720cfd17791b3921f7c001eedbff9eabe588183eb98b3c17c9e15ae4193ee86b] <==
	I1026 01:29:58.113782       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1026 01:29:58.113812       1 metrics.go:61] Registering metrics
	I1026 01:29:58.113872       1 controller.go:378] Syncing nftables rules
	I1026 01:30:07.919672       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:30:07.919709       1 main.go:300] handling current node
	I1026 01:30:17.913086       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:30:17.913148       1 main.go:300] handling current node
	I1026 01:30:27.919497       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:30:27.919534       1 main.go:300] handling current node
	I1026 01:30:37.920906       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:30:37.920949       1 main.go:300] handling current node
	I1026 01:30:47.920337       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:30:47.920374       1 main.go:300] handling current node
	I1026 01:30:57.912928       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:30:57.912965       1 main.go:300] handling current node
	I1026 01:31:07.913556       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:31:07.913599       1 main.go:300] handling current node
	I1026 01:31:17.920933       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:31:17.920964       1 main.go:300] handling current node
	I1026 01:31:27.921894       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:31:27.921930       1 main.go:300] handling current node
	I1026 01:31:37.915508       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:31:37.915543       1 main.go:300] handling current node
	I1026 01:31:47.913023       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1026 01:31:47.913058       1 main.go:300] handling current node
	
	
	==> kube-apiserver [caf4499d19d569088060b42ff185c8cff3e175b5b056d516b11326fabb013bc9] <==
	I1026 01:34:51.838069       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1026 01:34:51.838079       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1026 01:35:25.916104       1 client.go:360] parsed scheme: "passthrough"
	I1026 01:35:25.916146       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1026 01:35:25.916156       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1026 01:35:30.781040       1 handler_proxy.go:102] no RequestInfo found in the context
	E1026 01:35:30.781123       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1026 01:35:30.781134       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 01:36:05.183467       1 client.go:360] parsed scheme: "passthrough"
	I1026 01:36:05.183512       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1026 01:36:05.183522       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1026 01:36:38.168035       1 client.go:360] parsed scheme: "passthrough"
	I1026 01:36:38.168090       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1026 01:36:38.168100       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1026 01:37:18.580106       1 client.go:360] parsed scheme: "passthrough"
	I1026 01:37:18.580226       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1026 01:37:18.580264       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1026 01:37:29.160914       1 handler_proxy.go:102] no RequestInfo found in the context
	E1026 01:37:29.161152       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1026 01:37:29.161266       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 01:37:57.599711       1 client.go:360] parsed scheme: "passthrough"
	I1026 01:37:57.599761       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1026 01:37:57.599962       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [ee5aa1f2e06d37fc47d50d21895e543cfad7eccbde6db8e0d53a238b154ae36d] <==
	I1026 01:29:36.509850       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1026 01:29:36.509881       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1026 01:29:36.560776       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1026 01:29:36.565941       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1026 01:29:36.566800       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1026 01:29:37.039359       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 01:29:37.102200       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1026 01:29:37.241378       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1026 01:29:37.242513       1 controller.go:606] quota admission added evaluator for: endpoints
	I1026 01:29:37.246944       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:29:37.576378       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 01:29:38.243722       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1026 01:29:38.680435       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1026 01:29:38.738682       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1026 01:29:54.211914       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1026 01:29:54.412093       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1026 01:30:00.823354       1 client.go:360] parsed scheme: "passthrough"
	I1026 01:30:00.823414       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1026 01:30:00.823424       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1026 01:30:32.056320       1 client.go:360] parsed scheme: "passthrough"
	I1026 01:30:32.056367       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1026 01:30:32.056376       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1026 01:31:09.668522       1 client.go:360] parsed scheme: "passthrough"
	I1026 01:31:09.668573       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1026 01:31:09.668582       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [407cc3b1c2340484a389d1795695876b82d7fd2c69eef4104c4586805e14bcab] <==
	W1026 01:33:51.601617       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1026 01:34:19.088035       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1026 01:34:23.252028       1 request.go:655] Throttling request took 1.048418027s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1?timeout=32s
	W1026 01:34:24.103608       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1026 01:34:49.589897       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1026 01:34:55.754101       1 request.go:655] Throttling request took 1.048146679s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W1026 01:34:56.605519       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1026 01:35:20.092113       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1026 01:35:28.255930       1 request.go:655] Throttling request took 1.048164262s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W1026 01:35:29.107377       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1026 01:35:50.594034       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1026 01:36:00.807926       1 request.go:655] Throttling request took 1.048415146s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W1026 01:36:01.609313       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1026 01:36:21.096067       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1026 01:36:33.259744       1 request.go:655] Throttling request took 1.048192687s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W1026 01:36:34.111365       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1026 01:36:51.597848       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1026 01:37:05.761740       1 request.go:655] Throttling request took 1.048147922s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1026 01:37:06.613115       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1026 01:37:22.100003       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1026 01:37:38.263851       1 request.go:655] Throttling request took 1.048427879s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1026 01:37:39.115306       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1026 01:37:52.602026       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1026 01:38:10.765714       1 request.go:655] Throttling request took 1.048288657s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W1026 01:38:11.617322       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [5605b568cc91e1db4847dcdd18e1e9c02903cbad2ecc0786a4871410d408f526] <==
	I1026 01:29:54.414025       1 shared_informer.go:247] Caches are synced for taint 
	I1026 01:29:54.414353       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W1026 01:29:54.414529       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-368787. Assuming now as a timestamp.
	I1026 01:29:54.414709       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1026 01:29:54.415146       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1026 01:29:54.418083       1 event.go:291] "Event occurred" object="old-k8s-version-368787" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-368787 event: Registered Node old-k8s-version-368787 in Controller"
	I1026 01:29:54.430108       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I1026 01:29:54.431526       1 shared_informer.go:247] Caches are synced for attach detach 
	I1026 01:29:54.433327       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-368787" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1026 01:29:54.433470       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5vwks"
	I1026 01:29:54.439504       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9q264"
	I1026 01:29:54.439606       1 shared_informer.go:247] Caches are synced for persistent volume 
	E1026 01:29:54.498687       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4a4fba1f-e07a-4fa4-b69a-d21df0994c4b", ResourceVersion:"278", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865502979, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241007-36f62932\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001b19f80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001b19fa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b19fc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b19fe0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b58000), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b58020), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241007-36f62932", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b58040)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b58080)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001b327e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001b3ad48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000aa3340), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000eba0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001b3ad90)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1026 01:29:54.584137       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1026 01:29:54.853454       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1026 01:29:54.853520       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1026 01:29:54.884362       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1026 01:29:55.679004       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1026 01:29:55.715096       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-ng789"
	I1026 01:29:59.414950       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1026 01:31:49.709426       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I1026 01:31:49.746855       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E1026 01:31:49.767804       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E1026 01:31:49.925268       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1026 01:31:50.902921       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-v2pwf"
	
	
	==> kube-proxy [79f5f9136e040504c1ccd26def0add28506e80fde10bb5fd004beda407501670] <==
	I1026 01:29:55.548819       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1026 01:29:55.549139       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1026 01:29:55.579247       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1026 01:29:55.579353       1 server_others.go:185] Using iptables Proxier.
	I1026 01:29:55.579586       1 server.go:650] Version: v1.20.0
	I1026 01:29:55.580083       1 config.go:315] Starting service config controller
	I1026 01:29:55.580109       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1026 01:29:55.589321       1 config.go:224] Starting endpoint slice config controller
	I1026 01:29:55.589350       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1026 01:29:55.685953       1 shared_informer.go:247] Caches are synced for service config 
	I1026 01:29:55.703436       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [f8701160de76e3035efa4b7981b51aa78fe29fed0b00c9e64d0e6ee36a1dcc52] <==
	I1026 01:32:30.513230       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1026 01:32:30.513302       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1026 01:32:30.544638       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1026 01:32:30.544738       1 server_others.go:185] Using iptables Proxier.
	I1026 01:32:30.544996       1 server.go:650] Version: v1.20.0
	I1026 01:32:30.545591       1 config.go:315] Starting service config controller
	I1026 01:32:30.545600       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1026 01:32:30.545617       1 config.go:224] Starting endpoint slice config controller
	I1026 01:32:30.545620       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1026 01:32:30.645738       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1026 01:32:30.645809       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [4cf9033bc9607eaafd5b665670535c078b1c85c54515459b47444929b86109d7] <==
	I1026 01:29:31.447015       1 serving.go:331] Generated self-signed cert in-memory
	W1026 01:29:35.728047       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 01:29:35.728321       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 01:29:35.728507       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 01:29:35.728626       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 01:29:35.839461       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1026 01:29:35.840057       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 01:29:35.840078       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 01:29:35.840095       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1026 01:29:35.856387       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 01:29:35.859371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 01:29:35.859925       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 01:29:35.860033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 01:29:35.860114       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 01:29:35.860186       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 01:29:35.860253       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 01:29:35.860326       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 01:29:35.860393       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 01:29:35.860457       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 01:29:35.860513       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 01:29:35.860615       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 01:29:36.790794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 01:29:36.837599       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1026 01:29:37.140248       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [9e91002c8dfb9e182dddc07a2fb6796674f120aae8d95e91cf40f39f059cf044] <==
	I1026 01:32:22.113409       1 serving.go:331] Generated self-signed cert in-memory
	W1026 01:32:28.059294       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 01:32:28.059359       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 01:32:28.059375       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 01:32:28.059381       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 01:32:28.243858       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1026 01:32:28.244485       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 01:32:28.244496       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 01:32:28.244557       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1026 01:32:28.345312       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Oct 26 01:36:35 old-k8s-version-368787 kubelet[658]: E1026 01:36:35.736727     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: I1026 01:36:48.730843     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
	Oct 26 01:36:48 old-k8s-version-368787 kubelet[658]: E1026 01:36:48.731231     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	Oct 26 01:36:49 old-k8s-version-368787 kubelet[658]: E1026 01:36:49.732052     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: I1026 01:37:02.730885     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
	Oct 26 01:37:02 old-k8s-version-368787 kubelet[658]: E1026 01:37:02.731253     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	Oct 26 01:37:04 old-k8s-version-368787 kubelet[658]: E1026 01:37:04.731836     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: I1026 01:37:16.730981     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
	Oct 26 01:37:16 old-k8s-version-368787 kubelet[658]: E1026 01:37:16.731416     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	Oct 26 01:37:17 old-k8s-version-368787 kubelet[658]: E1026 01:37:17.732038     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: I1026 01:37:28.730850     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
	Oct 26 01:37:28 old-k8s-version-368787 kubelet[658]: E1026 01:37:28.731206     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	Oct 26 01:37:32 old-k8s-version-368787 kubelet[658]: E1026 01:37:32.731824     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: I1026 01:37:42.731103     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
	Oct 26 01:37:42 old-k8s-version-368787 kubelet[658]: E1026 01:37:42.732241     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	Oct 26 01:37:47 old-k8s-version-368787 kubelet[658]: E1026 01:37:47.735074     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: I1026 01:37:54.730913     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
	Oct 26 01:37:54 old-k8s-version-368787 kubelet[658]: E1026 01:37:54.732512     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	Oct 26 01:38:01 old-k8s-version-368787 kubelet[658]: E1026 01:38:01.735860     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 01:38:05 old-k8s-version-368787 kubelet[658]: I1026 01:38:05.730841     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: 6cd756151e892189bc98f86e9e74249242e40e9470b7360a95e864e6d63eed01
	Oct 26 01:38:05 old-k8s-version-368787 kubelet[658]: E1026 01:38:05.731200     658 pod_workers.go:191] Error syncing pod 311d8790-49ba-48f7-891e-5a6938f10dbb ("dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-w4mwk_kubernetes-dashboard(311d8790-49ba-48f7-891e-5a6938f10dbb)"
	Oct 26 01:38:13 old-k8s-version-368787 kubelet[658]: E1026 01:38:13.743102     658 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 26 01:38:13 old-k8s-version-368787 kubelet[658]: E1026 01:38:13.743154     658 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 26 01:38:13 old-k8s-version-368787 kubelet[658]: E1026 01:38:13.743293     658 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-7tsjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1
b-255a-4898-917e-52f20c4e511f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 26 01:38:13 old-k8s-version-368787 kubelet[658]: E1026 01:38:13.743365     658 pod_workers.go:191] Error syncing pod fa7c7e1b-255a-4898-917e-52f20c4e511f ("metrics-server-9975d5f86-v2pwf_kube-system(fa7c7e1b-255a-4898-917e-52f20c4e511f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	
	==> kubernetes-dashboard [ed8fe83be8b1e226ae7ddc31e41c28f4c6a711e76c27dff30507d604cd6b6125] <==
	2024/10/26 01:32:50 Starting overwatch
	2024/10/26 01:32:50 Using namespace: kubernetes-dashboard
	2024/10/26 01:32:50 Using in-cluster config to connect to apiserver
	2024/10/26 01:32:50 Using secret token for csrf signing
	2024/10/26 01:32:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/26 01:32:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/26 01:32:50 Successful initial request to the apiserver, version: v1.20.0
	2024/10/26 01:32:50 Generating JWE encryption key
	2024/10/26 01:32:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/26 01:32:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/26 01:32:50 Initializing JWE encryption key from synchronized object
	2024/10/26 01:32:50 Creating in-cluster Sidecar client
	2024/10/26 01:32:50 Serving insecurely on HTTP port: 9090
	2024/10/26 01:32:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:33:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:33:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:34:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:34:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:35:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:35:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:36:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:36:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:37:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/26 01:37:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3765e18684825aee82d76a7a38e7d5c11edfc8a3978c9822b2d5ca1908a3edad] <==
	I1026 01:29:57.881908       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 01:29:57.904197       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 01:29:57.904243       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 01:29:57.919389       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 01:29:57.919849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-368787_feb7fabf-98fb-48ab-8c01-82b3c62a2ef0!
	I1026 01:29:57.919480       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a95d0560-36d1-497f-a232-dbdd16032885", APIVersion:"v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-368787_feb7fabf-98fb-48ab-8c01-82b3c62a2ef0 became leader
	I1026 01:29:58.021067       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-368787_feb7fabf-98fb-48ab-8c01-82b3c62a2ef0!
	
	
	==> storage-provisioner [f4444a86e1f19d37e6fa95d2aa26a2d30fe3a574d5b0a2da6f1d4c3114df8adb] <==
	I1026 01:32:32.300209       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 01:32:32.319088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 01:32:32.319306       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 01:32:49.868673       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 01:32:49.868845       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-368787_921e834b-2258-46ac-9188-5c80455cd09d!
	I1026 01:32:49.869786       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a95d0560-36d1-497f-a232-dbdd16032885", APIVersion:"v1", ResourceVersion:"789", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-368787_921e834b-2258-46ac-9188-5c80455cd09d became leader
	I1026 01:32:49.969347       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-368787_921e834b-2258-46ac-9188-5c80455cd09d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-368787 -n old-k8s-version-368787
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-368787 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-v2pwf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-368787 describe pod metrics-server-9975d5f86-v2pwf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-368787 describe pod metrics-server-9975d5f86-v2pwf: exit status 1 (150.516053ms)

                                                
                                                
** stderr ** 
	E1026 01:38:19.296254 2087098 memcache.go:287] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E1026 01:38:19.310966 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E1026 01:38:19.320032 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E1026 01:38:19.326676 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E1026 01:38:19.337183 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E1026 01:38:19.341047 2087098 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	Error from server (NotFound): pods "metrics-server-9975d5f86-v2pwf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-368787 describe pod metrics-server-9975d5f86-v2pwf: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (375.85s)
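For manual triage, a minimal sketch of the same post-mortem checks the harness ran above, assuming the old-k8s-version-368787 profile still exists on the local machine and the out/minikube-linux-arm64 binary has been built (all three commands are taken verbatim from the helpers_test.go output above):

	# check apiserver status for the profile
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-368787 -n old-k8s-version-368787
	# list pods that are not Running (the harness flagged metrics-server-9975d5f86-v2pwf)
	kubectl --context old-k8s-version-368787 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# describe the non-running pod; this may return NotFound if the pod has since been replaced, as it did in this run
	kubectl --context old-k8s-version-368787 describe pod metrics-server-9975d5f86-v2pwf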

                                                
                                    

Test pass (300/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.95
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 8.18
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.28
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 217.72
29 TestAddons/serial/Volcano 39.93
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.89
35 TestAddons/parallel/Registry 17.4
36 TestAddons/parallel/Ingress 19.52
37 TestAddons/parallel/InspektorGadget 10.78
38 TestAddons/parallel/MetricsServer 6.84
40 TestAddons/parallel/CSI 57.02
41 TestAddons/parallel/Headlamp 17.52
42 TestAddons/parallel/CloudSpanner 6.77
43 TestAddons/parallel/LocalPath 52.18
44 TestAddons/parallel/NvidiaDevicePlugin 6.78
45 TestAddons/parallel/Yakd 11.86
47 TestAddons/StoppedEnableDisable 12.31
48 TestCertOptions 38.52
49 TestCertExpiration 226.82
51 TestForceSystemdFlag 42.57
52 TestForceSystemdEnv 42
53 TestDockerEnvContainerd 43.56
58 TestErrorSpam/setup 30.16
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.06
61 TestErrorSpam/pause 1.94
62 TestErrorSpam/unpause 1.9
63 TestErrorSpam/stop 12.31
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 78.92
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.19
70 TestFunctional/serial/KubeContext 0.1
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.09
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.96
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 43.63
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.76
86 TestFunctional/serial/LogsFileCmd 1.73
87 TestFunctional/serial/InvalidService 4.6
89 TestFunctional/parallel/ConfigCmd 0.51
90 TestFunctional/parallel/DashboardCmd 7.61
91 TestFunctional/parallel/DryRun 0.42
92 TestFunctional/parallel/InternationalLanguage 0.25
93 TestFunctional/parallel/StatusCmd 1.19
97 TestFunctional/parallel/ServiceCmdConnect 10.62
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 28.13
101 TestFunctional/parallel/SSHCmd 0.65
102 TestFunctional/parallel/CpCmd 2.32
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.58
109 TestFunctional/parallel/NodeLabels 0.34
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
113 TestFunctional/parallel/License 0.46
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.42
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
126 TestFunctional/parallel/ServiceCmd/List 0.51
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
129 TestFunctional/parallel/ServiceCmd/Format 0.38
130 TestFunctional/parallel/ServiceCmd/URL 0.38
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
132 TestFunctional/parallel/ProfileCmd/profile_list 0.53
133 TestFunctional/parallel/MountCmd/any-port 7.86
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.74
135 TestFunctional/parallel/MountCmd/specific-port 2.21
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.32
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.33
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.67
144 TestFunctional/parallel/ImageCommands/Setup 0.79
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.5
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.7
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 133.39
162 TestMultiControlPlane/serial/DeployApp 33.03
163 TestMultiControlPlane/serial/PingHostFromPods 2.05
164 TestMultiControlPlane/serial/AddWorkerNode 21.96
165 TestMultiControlPlane/serial/NodeLabels 0.11
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
167 TestMultiControlPlane/serial/CopyFile 19.45
168 TestMultiControlPlane/serial/StopSecondaryNode 13.05
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
170 TestMultiControlPlane/serial/RestartSecondaryNode 31.06
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 136.19
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.67
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
175 TestMultiControlPlane/serial/StopCluster 36.1
176 TestMultiControlPlane/serial/RestartCluster 43.2
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
178 TestMultiControlPlane/serial/AddSecondaryNode 42.64
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.56
183 TestJSONOutput/start/Command 51.22
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 1.11
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.77
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
208 TestKicCustomNetwork/create_custom_network 40.01
209 TestKicCustomNetwork/use_default_bridge_network 34.93
210 TestKicExistingNetwork 33.42
211 TestKicCustomSubnet 33.39
212 TestKicStaticIP 33.85
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 69.71
217 TestMountStart/serial/StartWithMountFirst 6.14
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 7.54
220 TestMountStart/serial/VerifyMountSecond 0.27
221 TestMountStart/serial/DeleteFirst 1.64
222 TestMountStart/serial/VerifyMountPostDelete 0.26
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.46
225 TestMountStart/serial/VerifyMountPostStop 0.27
228 TestMultiNode/serial/FreshStart2Nodes 105.44
229 TestMultiNode/serial/DeployApp2Nodes 19.17
230 TestMultiNode/serial/PingHostFrom2Pods 1.02
231 TestMultiNode/serial/AddNode 17.93
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.75
234 TestMultiNode/serial/CopyFile 10.23
235 TestMultiNode/serial/StopNode 2.29
236 TestMultiNode/serial/StartAfterStop 9.95
237 TestMultiNode/serial/RestartKeepsNodes 103.78
238 TestMultiNode/serial/DeleteNode 5.63
239 TestMultiNode/serial/StopMultiNode 24.14
240 TestMultiNode/serial/RestartMultiNode 56.73
241 TestMultiNode/serial/ValidateNameConflict 36.52
246 TestPreload 113.47
248 TestScheduledStopUnix 105.95
251 TestInsufficientStorage 10.48
252 TestRunningBinaryUpgrade 74.02
254 TestKubernetesUpgrade 345.67
255 TestMissingContainerUpgrade 181.01
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
258 TestNoKubernetes/serial/StartWithK8s 41.49
259 TestNoKubernetes/serial/StartWithStopK8s 18.53
260 TestNoKubernetes/serial/Start 7.97
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
262 TestNoKubernetes/serial/ProfileList 0.96
263 TestNoKubernetes/serial/Stop 1.23
264 TestNoKubernetes/serial/StartNoArgs 7.25
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
266 TestStoppedBinaryUpgrade/Setup 0.96
267 TestStoppedBinaryUpgrade/Upgrade 91.52
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
277 TestPause/serial/Start 50.77
278 TestPause/serial/SecondStartNoReconfiguration 7.53
279 TestPause/serial/Pause 1.06
280 TestPause/serial/VerifyStatus 0.47
281 TestPause/serial/Unpause 0.9
282 TestPause/serial/PauseAgain 1.15
283 TestPause/serial/DeletePaused 3.14
284 TestPause/serial/VerifyDeletedResources 0.51
292 TestNetworkPlugins/group/false 4.74
297 TestStartStop/group/old-k8s-version/serial/FirstStart 164.57
298 TestStartStop/group/old-k8s-version/serial/DeployApp 10.87
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.58
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.8
302 TestStartStop/group/old-k8s-version/serial/Stop 12.55
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.38
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.13
310 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
312 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
313 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.08
315 TestStartStop/group/embed-certs/serial/FirstStart 82.84
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
319 TestStartStop/group/old-k8s-version/serial/Pause 3.06
321 TestStartStop/group/no-preload/serial/FirstStart 60.55
322 TestStartStop/group/embed-certs/serial/DeployApp 10.48
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.48
324 TestStartStop/group/embed-certs/serial/Stop 12.51
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
326 TestStartStop/group/embed-certs/serial/SecondStart 267.31
327 TestStartStop/group/no-preload/serial/DeployApp 8.46
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.35
329 TestStartStop/group/no-preload/serial/Stop 12.26
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
331 TestStartStop/group/no-preload/serial/SecondStart 269.41
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.03
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
335 TestStartStop/group/embed-certs/serial/Pause 3.02
337 TestStartStop/group/newest-cni/serial/FirstStart 39.27
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.16
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.35
341 TestStartStop/group/no-preload/serial/Pause 4.29
342 TestNetworkPlugins/group/auto/Start 86.71
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.66
345 TestStartStop/group/newest-cni/serial/Stop 3.11
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
347 TestStartStop/group/newest-cni/serial/SecondStart 24.37
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
351 TestStartStop/group/newest-cni/serial/Pause 4.09
352 TestNetworkPlugins/group/kindnet/Start 83.9
353 TestNetworkPlugins/group/auto/KubeletFlags 0.31
354 TestNetworkPlugins/group/auto/NetCatPod 9.33
355 TestNetworkPlugins/group/auto/DNS 0.19
356 TestNetworkPlugins/group/auto/Localhost 0.16
357 TestNetworkPlugins/group/auto/HairPin 0.17
358 TestNetworkPlugins/group/calico/Start 68.17
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
361 TestNetworkPlugins/group/kindnet/NetCatPod 10.34
362 TestNetworkPlugins/group/kindnet/DNS 0.3
363 TestNetworkPlugins/group/kindnet/Localhost 0.37
364 TestNetworkPlugins/group/kindnet/HairPin 0.29
365 TestNetworkPlugins/group/custom-flannel/Start 52.42
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.44
368 TestNetworkPlugins/group/calico/NetCatPod 11.41
369 TestNetworkPlugins/group/calico/DNS 0.33
370 TestNetworkPlugins/group/calico/Localhost 0.3
371 TestNetworkPlugins/group/calico/HairPin 0.22
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.36
374 TestNetworkPlugins/group/custom-flannel/DNS 0.19
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
377 TestNetworkPlugins/group/enable-default-cni/Start 85.79
378 TestNetworkPlugins/group/flannel/Start 54.52
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
382 TestNetworkPlugins/group/flannel/NetCatPod 11.35
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
387 TestNetworkPlugins/group/flannel/DNS 0.35
388 TestNetworkPlugins/group/flannel/Localhost 0.18
389 TestNetworkPlugins/group/flannel/HairPin 0.17
390 TestNetworkPlugins/group/bridge/Start 72.42
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
392 TestNetworkPlugins/group/bridge/NetCatPod 8.29
393 TestNetworkPlugins/group/bridge/DNS 0.19
394 TestNetworkPlugins/group/bridge/Localhost 0.16
395 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (6.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-635681 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-635681 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.952413851s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.95s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1026 00:43:21.267518 1864373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1026 00:43:21.267597 1864373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-635681
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-635681: exit status 85 (73.988362ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-635681 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |          |
	|         | -p download-only-635681        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:43:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:43:14.364201 1864378 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:43:14.364402 1864378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:14.364429 1864378 out.go:358] Setting ErrFile to fd 2...
	I1026 00:43:14.364448 1864378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:14.364739 1864378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	W1026 00:43:14.364909 1864378 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19868-1857747/.minikube/config/config.json: open /home/jenkins/minikube-integration/19868-1857747/.minikube/config/config.json: no such file or directory
	I1026 00:43:14.365371 1864378 out.go:352] Setting JSON to true
	I1026 00:43:14.366234 1864378 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30345,"bootTime":1729873050,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 00:43:14.366331 1864378 start.go:139] virtualization:  
	I1026 00:43:14.369083 1864378 out.go:97] [download-only-635681] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1026 00:43:14.369314 1864378 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball: no such file or directory
	I1026 00:43:14.369372 1864378 notify.go:220] Checking for updates...
	I1026 00:43:14.371596 1864378 out.go:169] MINIKUBE_LOCATION=19868
	I1026 00:43:14.373399 1864378 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:43:14.375141 1864378 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 00:43:14.376941 1864378 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	I1026 00:43:14.378676 1864378 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1026 00:43:14.382320 1864378 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 00:43:14.382606 1864378 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:43:14.413007 1864378 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1026 00:43:14.413129 1864378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:43:14.461034 1864378 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 00:43:14.450835265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 00:43:14.461163 1864378 docker.go:318] overlay module found
	I1026 00:43:14.463172 1864378 out.go:97] Using the docker driver based on user configuration
	I1026 00:43:14.463207 1864378 start.go:297] selected driver: docker
	I1026 00:43:14.463216 1864378 start.go:901] validating driver "docker" against <nil>
	I1026 00:43:14.463435 1864378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:43:14.517777 1864378 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 00:43:14.507847727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 00:43:14.517994 1864378 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:43:14.518301 1864378 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1026 00:43:14.518462 1864378 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 00:43:14.520493 1864378 out.go:169] Using Docker driver with root privileges
	I1026 00:43:14.522263 1864378 cni.go:84] Creating CNI manager for ""
	I1026 00:43:14.522335 1864378 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1026 00:43:14.522349 1864378 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 00:43:14.522427 1864378 start.go:340] cluster config:
	{Name:download-only-635681 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-635681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:43:14.524378 1864378 out.go:97] Starting "download-only-635681" primary control-plane node in "download-only-635681" cluster
	I1026 00:43:14.524417 1864378 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1026 00:43:14.526303 1864378 out.go:97] Pulling base image v0.0.45-1729876044-19868 ...
	I1026 00:43:14.526328 1864378 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1026 00:43:14.526434 1864378 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1026 00:43:14.542126 1864378 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1026 00:43:14.542336 1864378 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory
	I1026 00:43:14.542434 1864378 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1026 00:43:14.600507 1864378 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1026 00:43:14.600538 1864378 cache.go:56] Caching tarball of preloaded images
	I1026 00:43:14.600702 1864378 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1026 00:43:14.602903 1864378 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1026 00:43:14.602928 1864378 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1026 00:43:14.686046 1864378 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-635681 host does not exist
	  To start a cluster, run: "minikube start -p download-only-635681"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-635681
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (8.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-110686 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-110686 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.183081075s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (8.18s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1026 00:43:29.879562 1864373 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1026 00:43:29.879602 1864373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-110686
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-110686: exit status 85 (72.378668ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-635681 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | -p download-only-635681        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| delete  | -p download-only-635681        | download-only-635681 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC | 26 Oct 24 00:43 UTC |
	| start   | -o=json --download-only        | download-only-110686 | jenkins | v1.34.0 | 26 Oct 24 00:43 UTC |                     |
	|         | -p download-only-110686        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/26 00:43:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 00:43:21.751093 1864582 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:43:21.751296 1864582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:21.751307 1864582 out.go:358] Setting ErrFile to fd 2...
	I1026 00:43:21.751556 1864582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:43:21.751879 1864582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 00:43:21.752404 1864582 out.go:352] Setting JSON to true
	I1026 00:43:21.753324 1864582 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30352,"bootTime":1729873050,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 00:43:21.753405 1864582 start.go:139] virtualization:  
	I1026 00:43:21.755816 1864582 out.go:97] [download-only-110686] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1026 00:43:21.756007 1864582 notify.go:220] Checking for updates...
	I1026 00:43:21.758016 1864582 out.go:169] MINIKUBE_LOCATION=19868
	I1026 00:43:21.760020 1864582 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:43:21.761948 1864582 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 00:43:21.764090 1864582 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	I1026 00:43:21.765907 1864582 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1026 00:43:21.769623 1864582 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 00:43:21.769866 1864582 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:43:21.790059 1864582 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1026 00:43:21.790186 1864582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:43:21.848402 1864582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-26 00:43:21.838985627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 00:43:21.848516 1864582 docker.go:318] overlay module found
	I1026 00:43:21.850437 1864582 out.go:97] Using the docker driver based on user configuration
	I1026 00:43:21.850471 1864582 start.go:297] selected driver: docker
	I1026 00:43:21.850480 1864582 start.go:901] validating driver "docker" against <nil>
	I1026 00:43:21.850586 1864582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:43:21.905826 1864582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-26 00:43:21.895236332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 00:43:21.906057 1864582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1026 00:43:21.906386 1864582 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1026 00:43:21.906558 1864582 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 00:43:21.908846 1864582 out.go:169] Using Docker driver with root privileges
	I1026 00:43:21.910679 1864582 cni.go:84] Creating CNI manager for ""
	I1026 00:43:21.910771 1864582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1026 00:43:21.910785 1864582 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 00:43:21.910887 1864582 start.go:340] cluster config:
	{Name:download-only-110686 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-110686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:43:21.912967 1864582 out.go:97] Starting "download-only-110686" primary control-plane node in "download-only-110686" cluster
	I1026 00:43:21.913013 1864582 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1026 00:43:21.914888 1864582 out.go:97] Pulling base image v0.0.45-1729876044-19868 ...
	I1026 00:43:21.914975 1864582 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1026 00:43:21.915067 1864582 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
	I1026 00:43:21.930285 1864582 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e to local cache
	I1026 00:43:21.930430 1864582 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory
	I1026 00:43:21.930448 1864582 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local cache directory, skipping pull
	I1026 00:43:21.930458 1864582 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in cache, skipping pull
	I1026 00:43:21.930465 1864582 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e as a tarball
	I1026 00:43:21.972118 1864582 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	I1026 00:43:21.972147 1864582 cache.go:56] Caching tarball of preloaded images
	I1026 00:43:21.972968 1864582 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
	I1026 00:43:21.974957 1864582 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1026 00:43:21.974982 1864582 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 ...
	I1026 00:43:22.064828 1864582 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:5a1c96cd03f848c5b0e8fb66f315acd5 -> /home/jenkins/minikube-integration/19868-1857747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-110686 host does not exist
	  To start a cluster, run: "minikube start -p download-only-110686"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.28s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-110686
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
I1026 00:43:31.276576 1864373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-773195 --alsologtostderr --binary-mirror http://127.0.0.1:36119 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-773195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-773195
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-701091
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-701091: exit status 85 (86.685776ms)

                                                
                                                
-- stdout --
	* Profile "addons-701091" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-701091"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-701091
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-701091: exit status 85 (82.13212ms)

                                                
                                                
-- stdout --
	* Profile "addons-701091" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-701091"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (217.72s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-701091 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-701091 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m37.723067745s)
--- PASS: TestAddons/Setup (217.72s)

                                                
                                    
TestAddons/serial/Volcano (39.93s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 45.247064ms
addons_test.go:815: volcano-admission stabilized in 45.715337ms
addons_test.go:823: volcano-controller stabilized in 46.364625ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-l52n9" [13f4db27-e8a8-4fb4-b99b-11029aa347c0] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003759222s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-98s2n" [8908137f-c324-4c00-91e9-4c16fb5101bc] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003547548s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-wj9sw" [a53e2197-30a2-4541-b2cc-e0800a8953cd] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003904574s
addons_test.go:842: (dbg) Run:  kubectl --context addons-701091 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-701091 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-701091 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [267c042b-38ef-4468-8ee9-aedad4936219] Pending
helpers_test.go:344: "test-job-nginx-0" [267c042b-38ef-4468-8ee9-aedad4936219] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [267c042b-38ef-4468-8ee9-aedad4936219] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.00438112s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-701091 addons disable volcano --alsologtostderr -v=1: (11.33787698s)
--- PASS: TestAddons/serial/Volcano (39.93s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-701091 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-701091 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.89s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-701091 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-701091 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dc98b3d9-7db5-4f81-8eef-8d986b25490f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dc98b3d9-7db5-4f81-8eef-8d986b25490f] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003955599s
addons_test.go:633: (dbg) Run:  kubectl --context addons-701091 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-701091 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-701091 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-701091 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.89s)

                                                
                                    
TestAddons/parallel/Registry (17.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.846772ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-2rgkp" [9af4b3e8-3928-4529-8107-d7160e8884d0] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008161351s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vx92m" [122d5ca3-471a-4fb7-85f8-5ed78250a4b0] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003698461s
addons_test.go:331: (dbg) Run:  kubectl --context addons-701091 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-701091 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-701091 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.405033958s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 ip
2024/10/26 00:48:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.40s)

                                                
                                    
TestAddons/parallel/Ingress (19.52s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-701091 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-701091 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-701091 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0952c88a-79c5-40eb-a430-0da6ae62c8f3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0952c88a-79c5-40eb-a430-0da6ae62c8f3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003271494s
I1026 00:49:43.663754 1864373 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-701091 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-701091 addons disable ingress-dns --alsologtostderr -v=1: (2.023278198s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-701091 addons disable ingress --alsologtostderr -v=1: (7.780102894s)
--- PASS: TestAddons/parallel/Ingress (19.52s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4swpn" [92214552-0978-4e2e-8c49-ca0b43ad9be5] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004208034s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-701091 addons disable inspektor-gadget --alsologtostderr -v=1: (5.772059353s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.84s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.684846ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-g6kfp" [e9a3f78b-98b3-47eb-994b-2d1bb4b0fc10] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003826672s
addons_test.go:402: (dbg) Run:  kubectl --context addons-701091 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.84s)

                                                
                                    
TestAddons/parallel/CSI (57.02s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1026 00:48:50.727703 1864373 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1026 00:48:50.733081 1864373 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1026 00:48:50.733123 1864373 kapi.go:107] duration metric: took 7.880312ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.892365ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-701091 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-701091 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0ebb8b8f-3339-4ef5-a5c9-573682cc2470] Pending
helpers_test.go:344: "task-pv-pod" [0ebb8b8f-3339-4ef5-a5c9-573682cc2470] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0ebb8b8f-3339-4ef5-a5c9-573682cc2470] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.005012059s
addons_test.go:511: (dbg) Run:  kubectl --context addons-701091 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-701091 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-701091 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-701091 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-701091 delete pod task-pv-pod: (1.147203484s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-701091 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-701091 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-701091 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cfbcc3d8-ab25-485f-b40d-28bb6ce8759a] Pending
helpers_test.go:344: "task-pv-pod-restore" [cfbcc3d8-ab25-485f-b40d-28bb6ce8759a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cfbcc3d8-ab25-485f-b40d-28bb6ce8759a] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003847818s
addons_test.go:553: (dbg) Run:  kubectl --context addons-701091 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-701091 delete pod task-pv-pod-restore: (1.265027367s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-701091 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-701091 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-701091 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.847339326s)
--- PASS: TestAddons/parallel/CSI (57.02s)
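Note: the snapshot/restore flow above can be replayed by hand against the same profile. A minimal sketch, assuming the testdata manifests referenced in the log are available in the working directory:

	kubectl --context addons-701091 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-701091 wait --for=condition=Ready pod/task-pv-pod --timeout=6m
	kubectl --context addons-701091 create -f testdata/csi-hostpath-driver/snapshot.yaml
	# poll until the snapshot is usable, as helpers_test.go:419 does above
	kubectl --context addons-701091 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	# restore the snapshot into a fresh claim and pod
	kubectl --context addons-701091 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-701091 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml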

                                                
                                    
TestAddons/parallel/Headlamp (17.52s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-701091 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-701091 --alsologtostderr -v=1: (1.569945311s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-gnhfx" [6253f745-5416-41d5-b5bc-bae96183760d] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-gnhfx" [6253f745-5416-41d5-b5bc-bae96183760d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-gnhfx" [6253f745-5416-41d5-b5bc-bae96183760d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004934482s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-701091 addons disable headlamp --alsologtostderr -v=1: (5.939217087s)
--- PASS: TestAddons/parallel/Headlamp (17.52s)
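Note: outside the test harness, the enable-and-wait sequence above reduces to the following sketch (profile name taken from this run; the wait timeout mirrors the 8m used by the test):

	out/minikube-linux-arm64 addons enable headlamp -p addons-701091
	kubectl --context addons-701091 -n headlamp wait --for=condition=Ready pod -l app.kubernetes.io/name=headlamp --timeout=8m
	out/minikube-linux-arm64 -p addons-701091 addons disable headlamp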

                                                
                                    
TestAddons/parallel/CloudSpanner (6.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-chdg8" [4c4054d7-1b62-4b28-a6d1-4fabc6a00787] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004016819s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.77s)

                                                
                                    
TestAddons/parallel/LocalPath (52.18s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-701091 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-701091 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701091 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [994b3813-ba76-488d-87ed-c8dd70053dd7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [994b3813-ba76-488d-87ed-c8dd70053dd7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [994b3813-ba76-488d-87ed-c8dd70053dd7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00428438s
addons_test.go:906: (dbg) Run:  kubectl --context addons-701091 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 ssh "cat /opt/local-path-provisioner/pvc-b8a1d1c7-84a1-4bf3-88b8-be5d56a8f2c1_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-701091 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-701091 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-701091 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.657258938s)
--- PASS: TestAddons/parallel/LocalPath (52.18s)
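Note: the repeated helpers_test.go:394 lines above are a jsonpath poll on the claim's phase. An equivalent shell loop, assuming the same context and claim name (the local-path class typically uses WaitForFirstConsumer, so the claim binds once the pod schedules):

	until [ "$(kubectl --context addons-701091 get pvc test-pvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2
	done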

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.78s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-64g5w" [4016a1e2-a8e3-4a61-a78e-4c097d1dc1c4] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008011542s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.78s)

                                                
                                    
TestAddons/parallel/Yakd (11.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pm2lt" [5fffa616-91e9-4df6-8126-bec62cca4916] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003455545s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-701091 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-701091 addons disable yakd --alsologtostderr -v=1: (5.852072204s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-701091
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-701091: (12.030536782s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-701091
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-701091
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-701091
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

                                                
                                    
TestCertOptions (38.52s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-712326 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-712326 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.815741353s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-712326 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-712326 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-712326 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-712326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-712326
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-712326: (2.018426499s)
--- PASS: TestCertOptions (38.52s)
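Note: the certificate and port checks above can be reproduced directly; a sketch using the same profile name (the grep pattern is only illustrative):

	# the extra --apiserver-ips/--apiserver-names should show up as SANs
	out/minikube-linux-arm64 -p cert-options-712326 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# the server URL in the kubeconfig should end in the non-default port 8555
	kubectl config view -o jsonpath='{.clusters[?(@.name=="cert-options-712326")].cluster.server}'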

                                                
                                    
TestCertExpiration (226.82s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-335477 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-335477 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.110338303s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-335477 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-335477 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.259373863s)
helpers_test.go:175: Cleaning up "cert-expiration-335477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-335477
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-335477: (2.448239682s)
--- PASS: TestCertExpiration (226.82s)
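Note: the two starts above appear to exercise certificate rotation: the first issues certificates valid for 3m, and the restart after they lapse reissues them for 8760h. A sketch for checking the resulting expiry from inside the node (profile name from this run):

	out/minikube-linux-arm64 -p cert-expiration-335477 ssh "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"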

                                                
                                    
TestForceSystemdFlag (42.57s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-412022 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-412022 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.807980837s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-412022 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-412022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-412022
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-412022: (2.313958052s)
--- PASS: TestForceSystemdFlag (42.57s)
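Note: the config check above is looking for the systemd cgroup driver in containerd's generated config; a minimal equivalent, profile name from this run:

	out/minikube-linux-arm64 -p force-systemd-flag-412022 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
	# with --force-systemd the runc options are expected to contain: SystemdCgroup = true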

                                                
                                    
TestForceSystemdEnv (42s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-968413 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-968413 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.402469433s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-968413 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-968413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-968413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-968413: (2.216709879s)
--- PASS: TestForceSystemdEnv (42.00s)

                                                
                                    
TestDockerEnvContainerd (43.56s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-926535 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-926535 --driver=docker  --container-runtime=containerd: (27.903739539s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-926535"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-926535": (1.035964817s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hJKdMr4EPsot/agent.1885491" SSH_AGENT_PID="1885492" DOCKER_HOST=ssh://docker@127.0.0.1:35013 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hJKdMr4EPsot/agent.1885491" SSH_AGENT_PID="1885492" DOCKER_HOST=ssh://docker@127.0.0.1:35013 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hJKdMr4EPsot/agent.1885491" SSH_AGENT_PID="1885492" DOCKER_HOST=ssh://docker@127.0.0.1:35013 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.184338403s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hJKdMr4EPsot/agent.1885491" SSH_AGENT_PID="1885492" DOCKER_HOST=ssh://docker@127.0.0.1:35013 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-926535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-926535
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-926535: (2.005215524s)
--- PASS: TestDockerEnvContainerd (43.56s)
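Note: the SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST assignments above are the variables that `docker-env --ssh-host --ssh-add` prints; interactively they are usually applied with eval rather than pasted. A sketch against the same profile:

	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-926535)"
	docker version        # now talks to the Docker daemon inside the minikube node over SSH
	docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls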

                                                
                                    
TestErrorSpam/setup (30.16s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-608559 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-608559 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-608559 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-608559 --driver=docker  --container-runtime=containerd: (30.156346321s)
--- PASS: TestErrorSpam/setup (30.16s)

                                                
                                    
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
TestErrorSpam/pause (1.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 pause
--- PASS: TestErrorSpam/pause (1.94s)

                                                
                                    
TestErrorSpam/unpause (1.9s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

                                                
                                    
TestErrorSpam/stop (12.31s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 stop: (12.116518756s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-608559 --log_dir /tmp/nospam-608559 stop
--- PASS: TestErrorSpam/stop (12.31s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19868-1857747/.minikube/files/etc/test/nested/copy/1864373/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (78.92s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-469870 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1026 00:52:09.673461 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:09.679822 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:09.691230 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:09.712676 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:09.754040 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:09.835430 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:09.996779 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:10.318378 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:10.960330 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:12.241617 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:14.802946 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:19.924201 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:30.165541 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:52:50.647670 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-469870 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m18.921250507s)
--- PASS: TestFunctional/serial/StartWithProxy (78.92s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.19s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1026 00:53:08.817095 1864373 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-469870 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-469870 --alsologtostderr -v=8: (6.189170053s)
functional_test.go:663: soft start took 6.192959501s for "functional-469870" cluster.
I1026 00:53:15.006612 1864373 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (6.19s)

                                                
                                    
TestFunctional/serial/KubeContext (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.10s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-469870 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 cache add registry.k8s.io/pause:3.1: (1.515144575s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 cache add registry.k8s.io/pause:3.3: (1.358915908s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 cache add registry.k8s.io/pause:latest: (1.218102403s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)
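Note: for reference, the cache flow exercised here and in the next few tests maps to a short CLI sequence; a sketch with the same profile and one of the same images:

	out/minikube-linux-arm64 -p functional-469870 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 cache list                     # the cache itself is profile-independent
	out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1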

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-469870 /tmp/TestFunctionalserialCacheCmdcacheadd_local4155606786/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 cache add minikube-local-cache-test:functional-469870
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 cache delete minikube-local-cache-test:functional-469870
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-469870
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-469870 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.850265ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 cache reload: (1.044638798s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
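Note: the reload check above removes the image from the node's runtime and then repopulates it from the host-side cache. The same loop by hand (all commands taken from the log; the failing inspecti is expected):

	out/minikube-linux-arm64 -p functional-469870 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-469870 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true
	out/minikube-linux-arm64 -p functional-469870 cache reload
	out/minikube-linux-arm64 -p functional-469870 ssh sudo crictl inspecti registry.k8s.io/pause:latest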

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 kubectl -- --context functional-469870 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-469870 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.63s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-469870 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1026 00:53:31.609340 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-469870 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.633260724s)
functional_test.go:761: restart took 43.63336731s for "functional-469870" cluster.
I1026 00:54:07.084377 1864373 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (43.63s)
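Note: the restart above threads a component flag through --extra-config. One way to confirm it reached the apiserver is to inspect the static pod's command line; the pod name below assumes the usual kube-apiserver-<node> naming and is not taken from this log:

	kubectl --context functional-469870 -n kube-system get pod kube-apiserver-functional-469870 -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins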

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-469870 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 logs: (1.764399598s)
--- PASS: TestFunctional/serial/LogsCmd (1.76s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 logs --file /tmp/TestFunctionalserialLogsFileCmd840260011/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 logs --file /tmp/TestFunctionalserialLogsFileCmd840260011/001/logs.txt: (1.730904236s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                    
TestFunctional/serial/InvalidService (4.6s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-469870 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-469870
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-469870: exit status 115 (627.625423ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31518 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-469870 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.60s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-469870 config get cpus: exit status 14 (85.433001ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-469870 config get cpus: exit status 14 (92.829995ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
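Note: exit status 14 above is how `config get` reports a key that is not set, so scripts have to tolerate it; a small sketch against the same profile:

	out/minikube-linux-arm64 -p functional-469870 config set cpus 2
	out/minikube-linux-arm64 -p functional-469870 config get cpus
	out/minikube-linux-arm64 -p functional-469870 config unset cpus
	out/minikube-linux-arm64 -p functional-469870 config get cpus || echo "cpus not set (exit $?)"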

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-469870 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-469870 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1900463: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.61s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-469870 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-469870 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (184.958192ms)

                                                
                                                
-- stdout --
	* [functional-469870] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 00:54:48.113015 1900200 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:54:48.113396 1900200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:54:48.113406 1900200 out.go:358] Setting ErrFile to fd 2...
	I1026 00:54:48.113412 1900200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:54:48.113682 1900200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 00:54:48.114075 1900200 out.go:352] Setting JSON to false
	I1026 00:54:48.115089 1900200 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31039,"bootTime":1729873050,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 00:54:48.115161 1900200 start.go:139] virtualization:  
	I1026 00:54:48.117739 1900200 out.go:177] * [functional-469870] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1026 00:54:48.120394 1900200 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:54:48.122116 1900200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:54:48.123103 1900200 notify.go:220] Checking for updates...
	I1026 00:54:48.126010 1900200 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 00:54:48.128255 1900200 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	I1026 00:54:48.130037 1900200 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 00:54:48.131942 1900200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:54:48.134296 1900200 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1026 00:54:48.134934 1900200 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:54:48.157172 1900200 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1026 00:54:48.157306 1900200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:54:48.216278 1900200 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 00:54:48.205536702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 00:54:48.216437 1900200 docker.go:318] overlay module found
	I1026 00:54:48.218480 1900200 out.go:177] * Using the docker driver based on existing profile
	I1026 00:54:48.220176 1900200 start.go:297] selected driver: docker
	I1026 00:54:48.220195 1900200 start.go:901] validating driver "docker" against &{Name:functional-469870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-469870 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:54:48.220314 1900200 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:54:48.222643 1900200 out.go:201] 
	W1026 00:54:48.224400 1900200 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 00:54:48.226391 1900200 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-469870 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-469870 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-469870 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (252.277066ms)

                                                
                                                
-- stdout --
	* [functional-469870] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 00:54:47.854261 1900098 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:54:47.854469 1900098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:54:47.854501 1900098 out.go:358] Setting ErrFile to fd 2...
	I1026 00:54:47.854523 1900098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:54:47.855530 1900098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 00:54:47.856029 1900098 out.go:352] Setting JSON to false
	I1026 00:54:47.857039 1900098 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31038,"bootTime":1729873050,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 00:54:47.857157 1900098 start.go:139] virtualization:  
	I1026 00:54:47.861313 1900098 out.go:177] * [functional-469870] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1026 00:54:47.863275 1900098 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 00:54:47.863369 1900098 notify.go:220] Checking for updates...
	I1026 00:54:47.867960 1900098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 00:54:47.869994 1900098 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 00:54:47.872615 1900098 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	I1026 00:54:47.874570 1900098 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 00:54:47.876616 1900098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 00:54:47.879358 1900098 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1026 00:54:47.879911 1900098 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 00:54:47.925196 1900098 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1026 00:54:47.925351 1900098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:54:48.018855 1900098 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 00:54:48.006452553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 00:54:48.018977 1900098 docker.go:318] overlay module found
	I1026 00:54:48.022018 1900098 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1026 00:54:48.024042 1900098 start.go:297] selected driver: docker
	I1026 00:54:48.024074 1900098 start.go:901] validating driver "docker" against &{Name:functional-469870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-469870 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 00:54:48.024195 1900098 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 00:54:48.027013 1900098 out.go:201] 
	W1026 00:54:48.029165 1900098 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1026 00:54:48.031943 1900098 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
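For reference, the localized failure above can be reproduced outside the test harness. This is a minimal sketch that assumes minikube picks its display language up from the standard locale environment variables (which is what this test relies on) and that the functional-469870 profile already exists:

	# Request French output via the locale, then repeat the undersized --dry-run start.
	# Expected: the same RSRC_INSUFFICIENT_REQ_MEMORY message in French and exit status 23.
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-469870 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=containerd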

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
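The three invocations above cover the default, templated, and JSON output modes of `status`. A minimal sketch of pulling out individual fields, reusing the template keys exercised by the test:

	# Print just the host and kubelet states using a Go template.
	out/minikube-linux-arm64 -p functional-469870 status -f 'host:{{.Host}} kubelet:{{.Kubelet}}'
	# Full machine-readable status for scripting.
	out/minikube-linux-arm64 -p functional-469870 status -o json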

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-469870 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-469870 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-zb6gs" [77db86b4-35a2-4a63-bcf8-c3cce44656a5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-zb6gs" [77db86b4-35a2-4a63-bcf8-c3cce44656a5] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004465178s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30475
functional_test.go:1675: http://192.168.49.2:30475: success! body:

Hostname: hello-node-connect-65d86f57f4-zb6gs

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30475
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.62s)
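The steps above amount to the usual deploy, expose, resolve-URL loop. A condensed sketch of the same workflow; the `curl` check at the end is an illustrative addition, not part of the test itself:

	kubectl --context functional-469870 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-469870 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-arm64 -p functional-469870 service hello-node-connect --url)
	curl -s "$URL"   # should return the echoserver report shown above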

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (28.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9a8c0f46-8c97-4f83-94d8-c45f444de951] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005018104s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-469870 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-469870 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-469870 get pvc myclaim -o=json
I1026 00:54:22.942840 1864373 retry.go:31] will retry after 1.847081686s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3efd6f76-40ac-4226-b4d3-7da2cec6bd30 ResourceVersion:604 Generation:0 CreationTimestamp:2024-10-26 00:54:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x40004f7940 VolumeMode:0x40004f7980 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-469870 get pvc myclaim -o=json
I1026 00:54:24.866061 1864373 retry.go:31] will retry after 2.832557659s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:3efd6f76-40ac-4226-b4d3-7da2cec6bd30 ResourceVersion:604 Generation:0 CreationTimestamp:2024-10-26 00:54:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x40004fb310 VolumeMode:0x40004fb350 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-469870 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-469870 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2ef2cc1e-0ff8-4e5c-a24f-5194073e6ead] Pending
helpers_test.go:344: "sp-pod" [2ef2cc1e-0ff8-4e5c-a24f-5194073e6ead] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2ef2cc1e-0ff8-4e5c-a24f-5194073e6ead] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.005715944s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-469870 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-469870 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-469870 delete -f testdata/storage-provisioner/pod.yaml: (1.190476048s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-469870 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [03bde0ab-a399-454a-b6e2-9b3ced0eccad] Pending
helpers_test.go:344: "sp-pod" [03bde0ab-a399-454a-b6e2-9b3ced0eccad] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004001609s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-469870 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.13s)
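The claim the test waits on is visible in the retry dumps above (500Mi, ReadWriteOnce, Filesystem). Below is a minimal stand-in for testdata/storage-provisioner/pvc.yaml applied from stdin; the real test file may differ in details:

	kubectl --context functional-469870 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF
	# (the <<- form strips the leading tabs used for display here)
	# The storage-provisioner addon should move the claim from Pending to Bound.
	kubectl --context functional-469870 get pvc myclaim -o jsonpath='{.status.phase}'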

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh -n functional-469870 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 cp functional-469870:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd908529425/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh -n functional-469870 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh -n functional-469870 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.32s)
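A compact sketch of the same round-trip: copy a file into the node, read it back over SSH, then copy it back out. The local destination path /tmp/cp-test.txt is arbitrary:

	out/minikube-linux-arm64 -p functional-469870 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-469870 ssh -n functional-469870 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p functional-469870 cp functional-469870:/home/docker/cp-test.txt /tmp/cp-test.txt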

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1864373/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo cat /etc/test/nested/copy/1864373/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1864373.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo cat /etc/ssl/certs/1864373.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1864373.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo cat /usr/share/ca-certificates/1864373.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/18643732.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo cat /etc/ssl/certs/18643732.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/18643732.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo cat /usr/share/ca-certificates/18643732.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.58s)
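The test checks both the synced .pem copies and their hashed aliases. A short sketch of how the hashed name relates to the certificate, assuming a local copy of the test cert (named 1864373.pem, as above) is at hand:

	# The .0 file names under /etc/ssl/certs are the OpenSSL subject-hash of the cert,
	# so the alias checked above can be recomputed from the .pem itself:
	openssl x509 -noout -hash -in 1864373.pem          # prints the hash used for the 51391683.0 alias
	out/minikube-linux-arm64 -p functional-469870 ssh "sudo ls -l /etc/ssl/certs | grep -i 51391683"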

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-469870 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-469870 ssh "sudo systemctl is-active docker": exit status 1 (315.019725ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-469870 ssh "sudo systemctl is-active crio": exit status 1 (369.855427ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
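The exit status 3 in the stderr blocks is simply `systemctl is-active` reporting an inactive unit; only containerd should be active in this profile. A minimal check mirroring the test:

	out/minikube-linux-arm64 -p functional-469870 ssh "sudo systemctl is-active containerd"   # expected: active, exit 0
	out/minikube-linux-arm64 -p functional-469870 ssh "sudo systemctl is-active docker"       # expected: inactive, exit 3
	out/minikube-linux-arm64 -p functional-469870 ssh "sudo systemctl is-active crio"         # expected: inactive, exit 3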

                                                
                                    
x
+
TestFunctional/parallel/License (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-469870 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-469870 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-469870 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1897846: os: process already finished
helpers_test.go:502: unable to terminate pid 1897662: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-469870 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-469870 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-469870 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d58c14a7-8be8-4fbb-af58-e6476747ecbb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d58c14a7-8be8-4fbb-af58-e6476747ecbb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004026438s
I1026 00:54:25.480343 1864373 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-469870 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.216.136 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
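Pulling the tunnel steps together: a background tunnel process, the LoadBalancer ingress IP lookup, and the direct access check. A condensed sketch; the `curl` probe is illustrative (the test uses an in-process HTTP client):

	out/minikube-linux-arm64 -p functional-469870 tunnel --alsologtostderr &
	TUNNEL_PID=$!
	IP=$(kubectl --context functional-469870 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -sI "http://$IP" | head -n 1      # e.g. an HTTP 200 once the tunnel routes the service IP
	kill "$TUNNEL_PID"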

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-469870 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-469870 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-469870 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-kvckc" [957cd3d8-d976-45cb-ac53-3124f014cd9c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-kvckc" [957cd3d8-d976-45cb-ac53-3124f014cd9c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00399501s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 service list -o json
functional_test.go:1494: Took "497.023289ms" to run "out/minikube-linux-arm64 -p functional-469870 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30570
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30570
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "451.040481ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "76.479787ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdany-port1636384618/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1729904085725123733" to /tmp/TestFunctionalparallelMountCmdany-port1636384618/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1729904085725123733" to /tmp/TestFunctionalparallelMountCmdany-port1636384618/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1729904085725123733" to /tmp/TestFunctionalparallelMountCmdany-port1636384618/001/test-1729904085725123733
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 00:54 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 00:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 00:54 test-1729904085725123733
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh cat /mount-9p/test-1729904085725123733
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-469870 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [65ecd686-87a4-4304-953f-18e4287ef369] Pending
helpers_test.go:344: "busybox-mount" [65ecd686-87a4-4304-953f-18e4287ef369] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [65ecd686-87a4-4304-953f-18e4287ef369] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [65ecd686-87a4-4304-953f-18e4287ef369] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003869755s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-469870 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdany-port1636384618/001:/mount-9p --alsologtostderr -v=1] ...
E1026 00:54:53.531226 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.86s)
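The mount test follows a simple pattern: share a host directory into the node over 9p, verify it with findmnt, then read the files from inside the node. A minimal sketch with a hypothetical host directory (/tmp/host-share):

	mkdir -p /tmp/host-share && date > /tmp/host-share/created-by-test
	out/minikube-linux-arm64 mount -p functional-469870 /tmp/host-share:/mount-9p --alsologtostderr -v=1 &
	MOUNT_PID=$!
	out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-469870 ssh "cat /mount-9p/created-by-test"
	kill "$MOUNT_PID"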

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "654.594891ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "81.512073ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.74s)
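The timing gap above comes from `--light` skipping the per-profile cluster status checks. Both forms for reference:

	# Full listing: probes each profile's cluster status, hence the longer runtime seen above.
	out/minikube-linux-arm64 profile list -o json
	# Light listing: skips the status checks and returns almost immediately.
	out/minikube-linux-arm64 profile list -o json --light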

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdspecific-port1921948602/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (593.156091ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 00:54:54.173695 1864373 retry.go:31] will retry after 282.84423ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdspecific-port1921948602/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-469870 ssh "sudo umount -f /mount-9p": exit status 1 (368.236904ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-469870 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdspecific-port1921948602/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.21s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2477201402/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2477201402/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2477201402/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T" /mount1
2024/10/26 00:54:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T" /mount1: exit status 1 (1.031730655s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 00:54:56.832073 1864373 retry.go:31] will retry after 294.081981ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-469870 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2477201402/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2477201402/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-469870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2477201402/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.32s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 version -o=json --components: (1.332089381s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-469870 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-469870
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kicbase/echo-server:functional-469870
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-469870 image ls --format short --alsologtostderr:
I1026 00:55:05.182120 1903062 out.go:345] Setting OutFile to fd 1 ...
I1026 00:55:05.182436 1903062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:05.182473 1903062 out.go:358] Setting ErrFile to fd 2...
I1026 00:55:05.182494 1903062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:05.182768 1903062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
I1026 00:55:05.183492 1903062 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:05.183661 1903062 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:05.184181 1903062 cli_runner.go:164] Run: docker container inspect functional-469870 --format={{.State.Status}}
I1026 00:55:05.210514 1903062 ssh_runner.go:195] Run: systemctl --version
I1026 00:55:05.210560 1903062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-469870
I1026 00:55:05.235425 1903062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/functional-469870/id_rsa Username:docker}
I1026 00:55:05.324641 1903062 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-469870 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2            | sha256:9404ae | 23.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.2            | sha256:d6b061 | 18.4MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| docker.io/library/nginx                     | latest             | sha256:4b1965 | 69.6MB |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:0bcd66 | 35.3MB |
| docker.io/kicbase/echo-server               | functional-469870  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-469870  | sha256:b64025 | 991B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.2            | sha256:f9c264 | 25.6MB |
| registry.k8s.io/kube-proxy                  | v1.31.2            | sha256:021d24 | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| docker.io/library/nginx                     | alpine             | sha256:577a23 | 21.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-469870 image ls --format table --alsologtostderr:
I1026 00:55:05.474899 1903127 out.go:345] Setting OutFile to fd 1 ...
I1026 00:55:05.475482 1903127 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:05.475518 1903127 out.go:358] Setting ErrFile to fd 2...
I1026 00:55:05.475539 1903127 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:05.475833 1903127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
I1026 00:55:05.476562 1903127 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:05.476718 1903127 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:05.477225 1903127 cli_runner.go:164] Run: docker container inspect functional-469870 --format={{.State.Status}}
I1026 00:55:05.507653 1903127 ssh_runner.go:195] Run: systemctl --version
I1026 00:55:05.507707 1903127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-469870
I1026 00:55:05.530465 1903127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/functional-469870/id_rsa Username:docker}
I1026 00:55:05.625215 1903127 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-469870 image ls --format json --alsologtostderr:
[{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21533923"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size"
:"16948420"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"26768683"},{"id":"sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"35320503"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-469870"],"size":"2173567"},{"id":"sha256:4b196525bd3cc6aa7a72b
a63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb"],"repoTags":["docker.io/library/nginx:latest"],"size":"69600252"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ce
eb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"25612805"},{"id":"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"23872272"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b64025e8ef40c42cee58b16a16791f03863397b16d3961d4972476816eee19f6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-46987
0"],"size":"991"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"18429679"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-469870 image ls --format json --alsologtostderr:
I1026 00:55:05.462880 1903126 out.go:345] Setting OutFile to fd 1 ...
I1026 00:55:05.463076 1903126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:05.463109 1903126 out.go:358] Setting ErrFile to fd 2...
I1026 00:55:05.463131 1903126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:05.463506 1903126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
I1026 00:55:05.464329 1903126 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:05.464468 1903126 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:05.465096 1903126 cli_runner.go:164] Run: docker container inspect functional-469870 --format={{.State.Status}}
I1026 00:55:05.493460 1903126 ssh_runner.go:195] Run: systemctl --version
I1026 00:55:05.494044 1903126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-469870
I1026 00:55:05.517872 1903126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/functional-469870/id_rsa Username:docker}
I1026 00:55:05.607765 1903126 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
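For reference, the JSON printed by `image ls --format json` above is a flat array of records with id, repoDigests, repoTags and size fields. A minimal standalone sketch (not part of the test suite) that shells out to the same binary and decodes that array; the binary path and profile name are copied from this run and would differ elsewhere:

// listimages.go - decode `minikube image ls --format json` output.
// Sketch only: binary path and profile name are taken from this report.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageRecord mirrors the fields visible in the JSON output above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size is reported as a string of bytes
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-469870", "image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
	}
}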

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-469870 image ls --format yaml --alsologtostderr:
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "35320503"
- id: sha256:4b196525bd3cc6aa7a72ba63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
repoTags:
- docker.io/library/nginx:latest
size: "69600252"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-469870
size: "2173567"
- id: sha256:b64025e8ef40c42cee58b16a16791f03863397b16d3961d4972476816eee19f6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-469870
size: "991"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "25612805"
- id: sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "23872272"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
repoTags:
- docker.io/library/nginx:alpine
size: "21533923"
- id: sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "26768683"
- id: sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "18429679"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-469870 image ls --format yaml --alsologtostderr:
I1026 00:55:05.189690 1903063 out.go:345] Setting OutFile to fd 1 ...
I1026 00:55:05.189836 1903063 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:05.189846 1903063 out.go:358] Setting ErrFile to fd 2...
I1026 00:55:05.189852 1903063 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:05.190110 1903063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
I1026 00:55:05.190831 1903063 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:05.190987 1903063 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:05.191621 1903063 cli_runner.go:164] Run: docker container inspect functional-469870 --format={{.State.Status}}
I1026 00:55:05.209661 1903063 ssh_runner.go:195] Run: systemctl --version
I1026 00:55:05.209715 1903063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-469870
I1026 00:55:05.231920 1903063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/functional-469870/id_rsa Username:docker}
I1026 00:55:05.319838 1903063 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-469870 ssh pgrep buildkitd: exit status 1 (277.191589ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image build -t localhost/my-image:functional-469870 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 image build -t localhost/my-image:functional-469870 testdata/build --alsologtostderr: (3.157556062s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-469870 image build -t localhost/my-image:functional-469870 testdata/build --alsologtostderr:
I1026 00:55:06.018820 1903246 out.go:345] Setting OutFile to fd 1 ...
I1026 00:55:06.022088 1903246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:06.022178 1903246 out.go:358] Setting ErrFile to fd 2...
I1026 00:55:06.022200 1903246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1026 00:55:06.023693 1903246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
I1026 00:55:06.024588 1903246 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:06.025357 1903246 config.go:182] Loaded profile config "functional-469870": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1026 00:55:06.025982 1903246 cli_runner.go:164] Run: docker container inspect functional-469870 --format={{.State.Status}}
I1026 00:55:06.054401 1903246 ssh_runner.go:195] Run: systemctl --version
I1026 00:55:06.054458 1903246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-469870
I1026 00:55:06.074523 1903246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35023 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/functional-469870/id_rsa Username:docker}
I1026 00:55:06.164086 1903246 build_images.go:161] Building image from path: /tmp/build.2044433243.tar
I1026 00:55:06.164210 1903246 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1026 00:55:06.173841 1903246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2044433243.tar
I1026 00:55:06.177757 1903246 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2044433243.tar: stat -c "%s %y" /var/lib/minikube/build/build.2044433243.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2044433243.tar': No such file or directory
I1026 00:55:06.177790 1903246 ssh_runner.go:362] scp /tmp/build.2044433243.tar --> /var/lib/minikube/build/build.2044433243.tar (3072 bytes)
I1026 00:55:06.208167 1903246 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2044433243
I1026 00:55:06.218198 1903246 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2044433243 -xf /var/lib/minikube/build/build.2044433243.tar
I1026 00:55:06.228426 1903246 containerd.go:394] Building image: /var/lib/minikube/build/build.2044433243
I1026 00:55:06.228531 1903246 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2044433243 --local dockerfile=/var/lib/minikube/build/build.2044433243 --output type=image,name=localhost/my-image:functional-469870
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c34460401e1380863d8dcff06ee14390c54aac08e700edec2b07c17837e1d36a
#8 exporting manifest sha256:c34460401e1380863d8dcff06ee14390c54aac08e700edec2b07c17837e1d36a 0.0s done
#8 exporting config sha256:7b7d55fbeacffc78151004476106eb8b871987dc194a6467578d514fc57c60dd 0.0s done
#8 naming to localhost/my-image:functional-469870 done
#8 DONE 0.1s
I1026 00:55:09.075685 1903246 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2044433243 --local dockerfile=/var/lib/minikube/build/build.2044433243 --output type=image,name=localhost/my-image:functional-469870: (2.847118518s)
I1026 00:55:09.075777 1903246 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2044433243
I1026 00:55:09.086319 1903246 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2044433243.tar
I1026 00:55:09.095497 1903246 build_images.go:217] Built localhost/my-image:functional-469870 from /tmp/build.2044433243.tar
I1026 00:55:09.095531 1903246 build_images.go:133] succeeded building to: functional-469870
I1026 00:55:09.095537 1903246 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)
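The buildkit trace above shows the three steps of the test's Dockerfile: FROM gcr.io/k8s-minikube/busybox, RUN true, and ADD content.txt. A rough sketch that reproduces the same flow against a throwaway build context; the Dockerfile body is reconstructed from those steps, not copied from testdata/build, and the binary/profile names are taken from this run:

// buildcontext.go - rebuild an image the way ImageBuild does, from a throwaway context.
// Sketch only: the Dockerfile below is reconstructed from the buildkit steps in the log
// (FROM busybox, RUN true, ADD content.txt), not copied from testdata/build.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build-ctx-")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Same command shape as functional_test.go:315, with the context swapped for dir.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-469870",
		"image", "build", "-t", "localhost/my-image:functional-469870", dir, "--alsologtostderr")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("image build: %v", err)
	}
}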

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-469870
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image load --daemon kicbase/echo-server:functional-469870 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 image load --daemon kicbase/echo-server:functional-469870 --alsologtostderr: (1.195661369s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image load --daemon kicbase/echo-server:functional-469870 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 image load --daemon kicbase/echo-server:functional-469870 --alsologtostderr: (1.427531797s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-469870
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image load --daemon kicbase/echo-server:functional-469870 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-469870 image load --daemon kicbase/echo-server:functional-469870 --alsologtostderr: (1.061895697s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image save kicbase/echo-server:functional-469870 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image rm kicbase/echo-server:functional-469870 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-469870
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-469870 image save --daemon kicbase/echo-server:functional-469870 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-469870
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
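The four tests above exercise a full round trip: save the image to a tarball, remove it from the runtime, load it back from the file, and save it back into the local Docker daemon. A compact sketch chaining the same save/rm/load subcommands and failing fast on the first error; the binary path, profile, image name and tarball path are the ones recorded in this run:

// roundtrip.go - save an image to a tar, remove it, and load it back,
// mirroring the ImageSaveToFile / ImageRemove / ImageLoadFromFile sequence above.
// Sketch only: binary path, profile, image name and tar path are taken from this report.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	const profile = "functional-469870"
	const image = "kicbase/echo-server:" + profile
	const tar = "/home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar"

	run("-p", profile, "image", "save", image, tar) // save to a tarball
	run("-p", profile, "image", "rm", image)        // drop it from the runtime
	run("-p", profile, "image", "load", tar)        // restore it from the tarball
	run("-p", profile, "image", "ls")               // confirm it is listed again
}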

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-469870
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-469870
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-469870
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (133.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-401976 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1026 00:57:09.673003 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-401976 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m12.53328569s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (133.39s)
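ha_test.go:107 follows the start with a `status -v=7` check. The plain-text shape of that status output appears later in this report (see the StopSecondaryNode stdout below); a small sketch that parses that shape into per-node field maps, assuming only the node-name-then-`key: value` layout visible there:

// statusparse.go - parse plain-text `minikube status` output into per-node fields.
// Sketch only: assumes the layout shown later in this report (a node name line
// followed by "key: value" lines, nodes separated by blank lines).
package main

import (
	"fmt"
	"strings"
)

func parseStatus(out string) map[string]map[string]string {
	nodes := map[string]map[string]string{}
	var current string
	for _, line := range strings.Split(out, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case line == "":
			current = "" // blank line ends the current node block
		case strings.Contains(line, ": "):
			if current != "" {
				k, v, _ := strings.Cut(line, ": ")
				nodes[current][k] = v
			}
		default:
			current = line // a bare line starts a new node block
			nodes[current] = map[string]string{}
		}
	}
	return nodes
}

func main() {
	sample := "ha-401976\ntype: Control Plane\nhost: Running\nkubelet: Running\n\nha-401976-m02\ntype: Control Plane\nhost: Stopped\n"
	for name, fields := range parseStatus(sample) {
		fmt.Println(name, "->", fields["host"])
	}
}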

                                                
                                    
TestMultiControlPlane/serial/DeployApp (33.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- rollout status deployment/busybox
E1026 00:57:37.372709 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-401976 -- rollout status deployment/busybox: (29.961286959s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-2xqp5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-82p5s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-w5g8m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-2xqp5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-82p5s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-w5g8m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-2xqp5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-82p5s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-w5g8m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.03s)
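Each busybox pod is asked to resolve kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local. The same check written as a plain loop over the pod names recorded above (they are specific to this run), using the minikube-wrapped kubectl exactly as the test does:

// dnscheck.go - repeat the DeployApp DNS checks: every busybox pod must resolve
// kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local.
// Sketch only: pod names are the ones recorded in this run and will differ elsewhere.
package main

import (
	"log"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-2xqp5", "busybox-7dff88458-82p5s", "busybox-7dff88458-w5g8m"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "ha-401976",
				"--", "exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				log.Fatalf("%s could not resolve %s: %v\n%s", pod, name, err, out)
			}
		}
	}
	log.Println("all pods resolved all names")
}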

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (2.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-2xqp5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-2xqp5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-82p5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-82p5s -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-w5g8m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-401976 -- exec busybox-7dff88458-w5g8m -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.05s)
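The shell pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` keeps the third space-separated field of the fifth output line, which is the resolved host address that the follow-up `ping -c 1 192.168.49.1` targets. A pure-Go equivalent of that extraction; the sample output in main is only an illustration of a layout the pipeline would accept, not captured from this run:

// hostip.go - extract the resolved address the same way the test's shell pipeline does:
// take line 5 of `nslookup host.minikube.internal` output, keep field 3.
// Sketch only: assumes busybox-style nslookup line layout, as the awk/cut pipeline does.
package main

import (
	"fmt"
	"strings"
)

func hostIP(nslookupOutput string) (string, bool) {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ") // NR==5 -> index 4
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true // cut -d' ' -f3 -> index 2
}

func main() {
	// Illustrative output shaped so that line 5, field 3 is the host address.
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal\n"
	if ip, ok := hostIP(sample); ok {
		fmt.Println("would ping", ip) // 192.168.49.1 in this run
	}
}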

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (21.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-401976 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-401976 -v=7 --alsologtostderr: (20.982248728s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.96s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-401976 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.008935821s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-401976 status --output json -v=7 --alsologtostderr: (1.025449088s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp testdata/cp-test.txt ha-401976:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile379830515/001/cp-test_ha-401976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976:/home/docker/cp-test.txt ha-401976-m02:/home/docker/cp-test_ha-401976_ha-401976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m02 "sudo cat /home/docker/cp-test_ha-401976_ha-401976-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976:/home/docker/cp-test.txt ha-401976-m03:/home/docker/cp-test_ha-401976_ha-401976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m03 "sudo cat /home/docker/cp-test_ha-401976_ha-401976-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976:/home/docker/cp-test.txt ha-401976-m04:/home/docker/cp-test_ha-401976_ha-401976-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m04 "sudo cat /home/docker/cp-test_ha-401976_ha-401976-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp testdata/cp-test.txt ha-401976-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile379830515/001/cp-test_ha-401976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m02:/home/docker/cp-test.txt ha-401976:/home/docker/cp-test_ha-401976-m02_ha-401976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976 "sudo cat /home/docker/cp-test_ha-401976-m02_ha-401976.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m02:/home/docker/cp-test.txt ha-401976-m03:/home/docker/cp-test_ha-401976-m02_ha-401976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m03 "sudo cat /home/docker/cp-test_ha-401976-m02_ha-401976-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m02:/home/docker/cp-test.txt ha-401976-m04:/home/docker/cp-test_ha-401976-m02_ha-401976-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m04 "sudo cat /home/docker/cp-test_ha-401976-m02_ha-401976-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp testdata/cp-test.txt ha-401976-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile379830515/001/cp-test_ha-401976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m03:/home/docker/cp-test.txt ha-401976:/home/docker/cp-test_ha-401976-m03_ha-401976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976 "sudo cat /home/docker/cp-test_ha-401976-m03_ha-401976.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m03:/home/docker/cp-test.txt ha-401976-m02:/home/docker/cp-test_ha-401976-m03_ha-401976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m02 "sudo cat /home/docker/cp-test_ha-401976-m03_ha-401976-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m03:/home/docker/cp-test.txt ha-401976-m04:/home/docker/cp-test_ha-401976-m03_ha-401976-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m04 "sudo cat /home/docker/cp-test_ha-401976-m03_ha-401976-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp testdata/cp-test.txt ha-401976-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile379830515/001/cp-test_ha-401976-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m04:/home/docker/cp-test.txt ha-401976:/home/docker/cp-test_ha-401976-m04_ha-401976.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976 "sudo cat /home/docker/cp-test_ha-401976-m04_ha-401976.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m04:/home/docker/cp-test.txt ha-401976-m02:/home/docker/cp-test_ha-401976-m04_ha-401976-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m02 "sudo cat /home/docker/cp-test_ha-401976-m04_ha-401976-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 cp ha-401976-m04:/home/docker/cp-test.txt ha-401976-m03:/home/docker/cp-test_ha-401976-m04_ha-401976-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 ssh -n ha-401976-m03 "sudo cat /home/docker/cp-test_ha-401976-m04_ha-401976-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.45s)
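CopyFile pushes testdata/cp-test.txt to every node and reads it back with `ssh -n ... sudo cat` to confirm the copy. A sketch of one such round trip against a single node, using the same cp and ssh subcommands shown above; binary, profile and node name come from this run:

// cpverify.go - copy a file onto a node and read it back, as the CopyFile steps above do.
// Sketch only: binary path, profile and node name are taken from this report; the
// source file path reuses the test's fixture name.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const bin = "out/minikube-linux-arm64"
	const profile = "ha-401976"
	const node = "ha-401976-m02"
	const src = "testdata/cp-test.txt"

	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}

	// Push the file to the node, then cat it back over ssh.
	if out, err := exec.Command(bin, "-p", profile, "cp", src, node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp: %v\n%s", err, out)
	}
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("contents differ after copy")
	}
	log.Println("copy verified on", node)
}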

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-401976 node stop m02 -v=7 --alsologtostderr: (12.326011142s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr: exit status 7 (724.658597ms)

                                                
                                                
-- stdout --
	ha-401976
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-401976-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401976-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-401976-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 00:58:55.632326 1919534 out.go:345] Setting OutFile to fd 1 ...
	I1026 00:58:55.632500 1919534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:58:55.632509 1919534 out.go:358] Setting ErrFile to fd 2...
	I1026 00:58:55.632515 1919534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 00:58:55.632752 1919534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 00:58:55.632944 1919534 out.go:352] Setting JSON to false
	I1026 00:58:55.632988 1919534 mustload.go:65] Loading cluster: ha-401976
	I1026 00:58:55.633054 1919534 notify.go:220] Checking for updates...
	I1026 00:58:55.633463 1919534 config.go:182] Loaded profile config "ha-401976": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1026 00:58:55.633502 1919534 status.go:174] checking status of ha-401976 ...
	I1026 00:58:55.634373 1919534 cli_runner.go:164] Run: docker container inspect ha-401976 --format={{.State.Status}}
	I1026 00:58:55.654759 1919534 status.go:371] ha-401976 host status = "Running" (err=<nil>)
	I1026 00:58:55.654780 1919534 host.go:66] Checking if "ha-401976" exists ...
	I1026 00:58:55.655076 1919534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401976
	I1026 00:58:55.678630 1919534 host.go:66] Checking if "ha-401976" exists ...
	I1026 00:58:55.678928 1919534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 00:58:55.678975 1919534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401976
	I1026 00:58:55.702653 1919534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35028 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/ha-401976/id_rsa Username:docker}
	I1026 00:58:55.792800 1919534 ssh_runner.go:195] Run: systemctl --version
	I1026 00:58:55.797316 1919534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 00:58:55.809337 1919534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 00:58:55.868030 1919534 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-26 00:58:55.85784726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 00:58:55.868641 1919534 kubeconfig.go:125] found "ha-401976" server: "https://192.168.49.254:8443"
	I1026 00:58:55.868686 1919534 api_server.go:166] Checking apiserver status ...
	I1026 00:58:55.868733 1919534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 00:58:55.880036 1919534 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1479/cgroup
	I1026 00:58:55.889760 1919534 api_server.go:182] apiserver freezer: "12:freezer:/docker/0a4b3384ebbb752e7a023a9768bcf0e73b73e336a891a37ef14c0bd253a511e2/kubepods/burstable/pod740fbd2534c9a746436db399e586a69b/0f07d4f78be734b0e89c37323cdea4774d8c43b9e073a4256949e32eb059b7e0"
	I1026 00:58:55.889834 1919534 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0a4b3384ebbb752e7a023a9768bcf0e73b73e336a891a37ef14c0bd253a511e2/kubepods/burstable/pod740fbd2534c9a746436db399e586a69b/0f07d4f78be734b0e89c37323cdea4774d8c43b9e073a4256949e32eb059b7e0/freezer.state
	I1026 00:58:55.898822 1919534 api_server.go:204] freezer state: "THAWED"
	I1026 00:58:55.898852 1919534 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 00:58:55.906916 1919534 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 00:58:55.906945 1919534 status.go:463] ha-401976 apiserver status = Running (err=<nil>)
	I1026 00:58:55.906957 1919534 status.go:176] ha-401976 status: &{Name:ha-401976 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 00:58:55.906974 1919534 status.go:174] checking status of ha-401976-m02 ...
	I1026 00:58:55.907427 1919534 cli_runner.go:164] Run: docker container inspect ha-401976-m02 --format={{.State.Status}}
	I1026 00:58:55.924180 1919534 status.go:371] ha-401976-m02 host status = "Stopped" (err=<nil>)
	I1026 00:58:55.924204 1919534 status.go:384] host is not running, skipping remaining checks
	I1026 00:58:55.924211 1919534 status.go:176] ha-401976-m02 status: &{Name:ha-401976-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 00:58:55.924246 1919534 status.go:174] checking status of ha-401976-m03 ...
	I1026 00:58:55.924621 1919534 cli_runner.go:164] Run: docker container inspect ha-401976-m03 --format={{.State.Status}}
	I1026 00:58:55.940780 1919534 status.go:371] ha-401976-m03 host status = "Running" (err=<nil>)
	I1026 00:58:55.940807 1919534 host.go:66] Checking if "ha-401976-m03" exists ...
	I1026 00:58:55.941121 1919534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401976-m03
	I1026 00:58:55.960391 1919534 host.go:66] Checking if "ha-401976-m03" exists ...
	I1026 00:58:55.960692 1919534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 00:58:55.960738 1919534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401976-m03
	I1026 00:58:55.978602 1919534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35038 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/ha-401976-m03/id_rsa Username:docker}
	I1026 00:58:56.072875 1919534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 00:58:56.086437 1919534 kubeconfig.go:125] found "ha-401976" server: "https://192.168.49.254:8443"
	I1026 00:58:56.086472 1919534 api_server.go:166] Checking apiserver status ...
	I1026 00:58:56.086519 1919534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 00:58:56.098634 1919534 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1340/cgroup
	I1026 00:58:56.108586 1919534 api_server.go:182] apiserver freezer: "12:freezer:/docker/464c78c957a2ef2aad3752db9c09a7c7b480c51ed8931f69d721df094afe139d/kubepods/burstable/pod7c7ae713c09e06087a385b8397a1f5b7/356c3871caed980fe982dd01c7392fb5bce614e64347bc6a2dc879ed64be1391"
	I1026 00:58:56.108664 1919534 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/464c78c957a2ef2aad3752db9c09a7c7b480c51ed8931f69d721df094afe139d/kubepods/burstable/pod7c7ae713c09e06087a385b8397a1f5b7/356c3871caed980fe982dd01c7392fb5bce614e64347bc6a2dc879ed64be1391/freezer.state
	I1026 00:58:56.117732 1919534 api_server.go:204] freezer state: "THAWED"
	I1026 00:58:56.117771 1919534 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 00:58:56.126069 1919534 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 00:58:56.126109 1919534 status.go:463] ha-401976-m03 apiserver status = Running (err=<nil>)
	I1026 00:58:56.126121 1919534 status.go:176] ha-401976-m03 status: &{Name:ha-401976-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 00:58:56.126141 1919534 status.go:174] checking status of ha-401976-m04 ...
	I1026 00:58:56.126463 1919534 cli_runner.go:164] Run: docker container inspect ha-401976-m04 --format={{.State.Status}}
	I1026 00:58:56.144848 1919534 status.go:371] ha-401976-m04 host status = "Running" (err=<nil>)
	I1026 00:58:56.144874 1919534 host.go:66] Checking if "ha-401976-m04" exists ...
	I1026 00:58:56.145268 1919534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-401976-m04
	I1026 00:58:56.167578 1919534 host.go:66] Checking if "ha-401976-m04" exists ...
	I1026 00:58:56.167920 1919534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 00:58:56.168014 1919534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-401976-m04
	I1026 00:58:56.186779 1919534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35043 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/ha-401976-m04/id_rsa Username:docker}
	I1026 00:58:56.278003 1919534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 00:58:56.294619 1919534 status.go:176] ha-401976-m04 status: &{Name:ha-401976-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.05s)
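The status output above is also a compact recipe for how minikube decides an apiserver is healthy: find the kube-apiserver process, resolve its freezer cgroup, confirm the cgroup is THAWED (not paused), then probe /healthz. A rough manual equivalent, a sketch only, assuming a cgroup v1 freezer hierarchy as on this host and substituting plain curl for minikube's authenticated client:

    # run on the control-plane node itself (for example via "minikube ssh -p ha-401976")
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    CG=$(sudo grep -E '^[0-9]+:freezer:' /proc/${PID}/cgroup | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"   # "THAWED" means the pod is running, not frozen
    curl -sk https://192.168.49.254:8443/healthz           # minikube looks for the literal body "ok"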

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (31.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 node start m02 -v=7 --alsologtostderr
E1026 00:59:17.061755 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:17.068236 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:17.079750 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:17.101157 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:17.142563 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:17.224136 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:17.385746 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:17.707515 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:18.349126 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:19.630806 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:22.192160 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-401976 node start m02 -v=7 --alsologtostderr: (29.981234835s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr
E1026 00:59:27.314069 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.003738898s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-401976 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-401976 -v=7 --alsologtostderr
E1026 00:59:37.556340 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 00:59:58.037808 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-401976 -v=7 --alsologtostderr: (37.256243136s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-401976 --wait=true -v=7 --alsologtostderr
E1026 01:00:39.002850 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-401976 --wait=true -v=7 --alsologtostderr: (1m38.710014117s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-401976
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (136.19s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-401976 node delete m03 -v=7 --alsologtostderr: (9.783927351s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)
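The two kubectl calls above are the health assertion for the delete: list the nodes, then print each node's Ready condition through a go-template. Run directly (template copied from the log), the expected output is one " True" line per remaining node, three in this run now that m03 is gone:

    kubectl get nodes
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'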

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 stop -v=7 --alsologtostderr
E1026 01:02:00.924958 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:02:09.672630 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-401976 stop -v=7 --alsologtostderr: (35.976943689s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr: exit status 7 (119.609373ms)

                                                
                                                
-- stdout --
	ha-401976
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401976-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-401976-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:02:32.856848 1933840 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:02:32.857262 1933840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:02:32.857276 1933840 out.go:358] Setting ErrFile to fd 2...
	I1026 01:02:32.857283 1933840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:02:32.857548 1933840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 01:02:32.857751 1933840 out.go:352] Setting JSON to false
	I1026 01:02:32.857788 1933840 mustload.go:65] Loading cluster: ha-401976
	I1026 01:02:32.858217 1933840 config.go:182] Loaded profile config "ha-401976": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1026 01:02:32.858252 1933840 status.go:174] checking status of ha-401976 ...
	I1026 01:02:32.858818 1933840 cli_runner.go:164] Run: docker container inspect ha-401976 --format={{.State.Status}}
	I1026 01:02:32.859405 1933840 notify.go:220] Checking for updates...
	I1026 01:02:32.876846 1933840 status.go:371] ha-401976 host status = "Stopped" (err=<nil>)
	I1026 01:02:32.876867 1933840 status.go:384] host is not running, skipping remaining checks
	I1026 01:02:32.876874 1933840 status.go:176] ha-401976 status: &{Name:ha-401976 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:02:32.876905 1933840 status.go:174] checking status of ha-401976-m02 ...
	I1026 01:02:32.877212 1933840 cli_runner.go:164] Run: docker container inspect ha-401976-m02 --format={{.State.Status}}
	I1026 01:02:32.899740 1933840 status.go:371] ha-401976-m02 host status = "Stopped" (err=<nil>)
	I1026 01:02:32.899759 1933840 status.go:384] host is not running, skipping remaining checks
	I1026 01:02:32.899766 1933840 status.go:176] ha-401976-m02 status: &{Name:ha-401976-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:02:32.899785 1933840 status.go:174] checking status of ha-401976-m04 ...
	I1026 01:02:32.900092 1933840 cli_runner.go:164] Run: docker container inspect ha-401976-m04 --format={{.State.Status}}
	I1026 01:02:32.923810 1933840 status.go:371] ha-401976-m04 host status = "Stopped" (err=<nil>)
	I1026 01:02:32.923831 1933840 status.go:384] host is not running, skipping remaining checks
	I1026 01:02:32.923838 1933840 status.go:176] ha-401976-m04 status: &{Name:ha-401976-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.10s)
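Worth noting in this block: the stop is considered successful even though the follow-up status call exits non-zero, because minikube status reports stopped hosts through its exit code (7 in this run) rather than failing outright. A scripted check in the same spirit, using this profile name:

    out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr
    echo $?    # 7 while every node is stopped; 0 again once the cluster is restarted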

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (43.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-401976 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-401976 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (41.813005101s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr
ha_test.go:568: (dbg) Done: out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr: (1.07298313s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (43.20s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (42.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-401976 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-401976 --control-plane -v=7 --alsologtostderr: (41.608981371s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-401976 status -v=7 --alsologtostderr: (1.034318438s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.64s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.561371882s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.56s)

                                                
                                    
TestJSONOutput/start/Command (51.22s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-716516 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1026 01:04:17.063409 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:04:44.767090 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-716516 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.214294865s)
--- PASS: TestJSONOutput/start/Command (51.22s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (1.11s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-716516 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 pause -p json-output-716516 --output=json --user=testUser: (1.110643743s)
--- PASS: TestJSONOutput/pause/Command (1.11s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-716516 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.77s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-716516 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-716516 --output=json --user=testUser: (5.771304606s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-350147 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-350147 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.742517ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2bc62012-60b1-4474-a2ac-bad4148fa91f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-350147] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5c1f5c1-db31-406e-a15f-2ad934020723","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19868"}}
	{"specversion":"1.0","id":"db31ae3e-fe1d-4dd8-92ec-a3cd435091de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66ddcd7c-78fc-41b0-9d9b-3d94fdc7e8b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig"}}
	{"specversion":"1.0","id":"352029a1-25cf-4afe-8732-402a448e9a4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube"}}
	{"specversion":"1.0","id":"c8bcfd2d-8a5a-4f34-8b8b-9271a23b97dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"294bcc73-8190-4928-9630-b9792b4036d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d9c7cca0-c97b-4c98-ae45-3d2edb7a45ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-350147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-350147
--- PASS: TestErrorJSONOutput (0.22s)
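Each line that --output=json emits is a CloudEvents envelope like the ones above, so the stream is easy to post-process. A small sketch, assuming jq is installed (it is not part of the test), that pulls the error message out of this failed start:

    out/minikube-linux-arm64 start -p json-output-error-350147 --memory=2200 --output=json \
        --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on linux/arm64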

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.01s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-935514 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-935514 --network=: (37.884223519s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-935514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-935514
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-935514: (2.100554519s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.01s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.93s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-207297 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-207297 --network=bridge: (32.861445534s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-207297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-207297
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-207297: (2.036101153s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.93s)

                                                
                                    
TestKicExistingNetwork (33.42s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1026 01:06:28.172649 1864373 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1026 01:06:28.188144 1864373 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1026 01:06:28.188227 1864373 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1026 01:06:28.188245 1864373 cli_runner.go:164] Run: docker network inspect existing-network
W1026 01:06:28.203419 1864373 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1026 01:06:28.203448 1864373 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1026 01:06:28.203461 1864373 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1026 01:06:28.204283 1864373 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1026 01:06:28.221264 1864373 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b80904004ad6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8f:a1:c9:9e} reservation:<nil>}
I1026 01:06:28.221639 1864373 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cef2e0}
I1026 01:06:28.221674 1864373 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1026 01:06:28.221727 1864373 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1026 01:06:28.293379 1864373 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-403712 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-403712 --network=existing-network: (31.256165356s)
helpers_test.go:175: Cleaning up "existing-network-403712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-403712
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-403712: (2.005313131s)
I1026 01:07:01.571046 1864373 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.42s)
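The log above is the complete pre-created-network flow: minikube inspects the existing networks, skips the taken 192.168.49.0/24, creates existing-network on the next free /24 for the test, and the profile is then started against it. Condensed, with the subnet this run happened to pick:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
        existing-network
    out/minikube-linux-arm64 start -p existing-network-403712 --network=existing-network
    docker network ls --format {{.Name}}    # existing-network is listed once and reused, not recreated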

                                                
                                    
TestKicCustomSubnet (33.39s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-638317 --subnet=192.168.60.0/24
E1026 01:07:09.672687 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-638317 --subnet=192.168.60.0/24: (31.28889735s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-638317 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-638317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-638317
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-638317: (2.082660571s)
--- PASS: TestKicCustomSubnet (33.39s)

                                                
                                    
TestKicStaticIP (33.85s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-453068 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-453068 --static-ip=192.168.200.200: (31.565787889s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-453068 ip
helpers_test.go:175: Cleaning up "static-ip-453068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-453068
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-453068: (2.090805398s)
--- PASS: TestKicStaticIP (33.85s)
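TestKicCustomSubnet and TestKicStaticIP exercise the two KIC addressing flags; the commands below are the same invocations from the log, paired with the inspect/ip calls the tests use to verify the result:

    out/minikube-linux-arm64 start -p custom-subnet-638317 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-638317 --format "{{(index .IPAM.Config 0).Subnet}}"   # 192.168.60.0/24
    out/minikube-linux-arm64 start -p static-ip-453068 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-453068 ip                                            # 192.168.200.200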

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (69.71s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-583679 --driver=docker  --container-runtime=containerd
E1026 01:08:32.735504 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-583679 --driver=docker  --container-runtime=containerd: (30.348055833s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-586772 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-586772 --driver=docker  --container-runtime=containerd: (33.810638128s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-583679
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-586772
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-586772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-586772
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-586772: (2.061596232s)
helpers_test.go:175: Cleaning up "first-583679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-583679
E1026 01:09:17.061658 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-583679: (1.970667356s)
--- PASS: TestMinikubeProfile (69.71s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-628769 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-628769 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.135184489s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.14s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-628769 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
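The two mount-start steps above are self-contained: start a no-Kubernetes profile with an explicit mount configuration, then confirm the host directory shows up at /minikube-host inside the node. The same pair of commands, as the test runs them:

    out/minikube-linux-arm64 start -p mount-start-1-628769 --memory=2048 --mount \
        --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
        --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p mount-start-1-628769 ssh -- ls /minikube-host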

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-630550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-630550 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.541883789s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.54s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-630550 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-628769 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-628769 --alsologtostderr -v=5: (1.637253207s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-630550 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-630550
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-630550: (1.204273277s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.46s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-630550
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-630550: (6.462433068s)
--- PASS: TestMountStart/serial/RestartStopped (7.46s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-630550 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (105.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072528 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-072528 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m44.902705962s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.44s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (19.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-072528 -- rollout status deployment/busybox: (17.159293503s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-2tlh5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-k2h9z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-2tlh5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-k2h9z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-2tlh5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-k2h9z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (19.17s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-2tlh5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-2tlh5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-k2h9z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-k2h9z -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
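The ping test follows one pattern per pod: resolve host.minikube.internal from inside the pod, then ping the address it resolves to (the 192.168.67.1 gateway on this run). For one of the busybox pods, straight from the log:

    out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-2tlh5 -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-arm64 kubectl -p multinode-072528 -- exec busybox-7dff88458-2tlh5 -- \
        sh -c "ping -c 1 192.168.67.1"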

                                                
                                    
TestMultiNode/serial/AddNode (17.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-072528 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-072528 -v 3 --alsologtostderr: (17.259113565s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.93s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-072528 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1026 01:12:09.672650 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp testdata/cp-test.txt multinode-072528:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp multinode-072528:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1588531904/001/cp-test_multinode-072528.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp multinode-072528:/home/docker/cp-test.txt multinode-072528-m02:/home/docker/cp-test_multinode-072528_multinode-072528-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m02 "sudo cat /home/docker/cp-test_multinode-072528_multinode-072528-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp multinode-072528:/home/docker/cp-test.txt multinode-072528-m03:/home/docker/cp-test_multinode-072528_multinode-072528-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m03 "sudo cat /home/docker/cp-test_multinode-072528_multinode-072528-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp testdata/cp-test.txt multinode-072528-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp multinode-072528-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1588531904/001/cp-test_multinode-072528-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp multinode-072528-m02:/home/docker/cp-test.txt multinode-072528:/home/docker/cp-test_multinode-072528-m02_multinode-072528.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528 "sudo cat /home/docker/cp-test_multinode-072528-m02_multinode-072528.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp multinode-072528-m02:/home/docker/cp-test.txt multinode-072528-m03:/home/docker/cp-test_multinode-072528-m02_multinode-072528-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m03 "sudo cat /home/docker/cp-test_multinode-072528-m02_multinode-072528-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp testdata/cp-test.txt multinode-072528-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp multinode-072528-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1588531904/001/cp-test_multinode-072528-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp multinode-072528-m03:/home/docker/cp-test.txt multinode-072528:/home/docker/cp-test_multinode-072528-m03_multinode-072528.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528 "sudo cat /home/docker/cp-test_multinode-072528-m03_multinode-072528.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 cp multinode-072528-m03:/home/docker/cp-test.txt multinode-072528-m02:/home/docker/cp-test_multinode-072528-m03_multinode-072528-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 ssh -n multinode-072528-m02 "sudo cat /home/docker/cp-test_multinode-072528-m03_multinode-072528-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.23s)
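
The CopyFile steps above exercise every direction of minikube's cp command: host to node, node back to the host, and node to node, each followed by an ssh cat to verify the copied contents. A condensed sketch of the same flow, using the profile and node names from this run (`minikube` stands in for the out/minikube-linux-arm64 binary under test, and the local destination path is an arbitrary placeholder):

    minikube -p multinode-072528 cp testdata/cp-test.txt multinode-072528:/home/docker/cp-test.txt
    minikube -p multinode-072528 cp multinode-072528:/home/docker/cp-test.txt ./cp-test_multinode-072528.txt
    minikube -p multinode-072528 cp multinode-072528:/home/docker/cp-test.txt multinode-072528-m02:/home/docker/cp-test_multinode-072528_multinode-072528-m02.txt
    minikube -p multinode-072528 ssh -n multinode-072528-m02 "sudo cat /home/docker/cp-test_multinode-072528_multinode-072528-m02.txt"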

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-072528 node stop m03: (1.235078043s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-072528 status: exit status 7 (530.845396ms)

                                                
                                                
-- stdout --
	multinode-072528
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-072528-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-072528-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-072528 status --alsologtostderr: exit status 7 (528.064028ms)

                                                
                                                
-- stdout --
	multinode-072528
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-072528-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-072528-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:12:22.077543 1987275 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:12:22.077759 1987275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:12:22.077789 1987275 out.go:358] Setting ErrFile to fd 2...
	I1026 01:12:22.077811 1987275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:12:22.078098 1987275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 01:12:22.078339 1987275 out.go:352] Setting JSON to false
	I1026 01:12:22.078392 1987275 mustload.go:65] Loading cluster: multinode-072528
	I1026 01:12:22.078488 1987275 notify.go:220] Checking for updates...
	I1026 01:12:22.078924 1987275 config.go:182] Loaded profile config "multinode-072528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1026 01:12:22.078970 1987275 status.go:174] checking status of multinode-072528 ...
	I1026 01:12:22.079630 1987275 cli_runner.go:164] Run: docker container inspect multinode-072528 --format={{.State.Status}}
	I1026 01:12:22.099133 1987275 status.go:371] multinode-072528 host status = "Running" (err=<nil>)
	I1026 01:12:22.099156 1987275 host.go:66] Checking if "multinode-072528" exists ...
	I1026 01:12:22.099518 1987275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-072528
	I1026 01:12:22.131203 1987275 host.go:66] Checking if "multinode-072528" exists ...
	I1026 01:12:22.131577 1987275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:12:22.131633 1987275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-072528
	I1026 01:12:22.148910 1987275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35149 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/multinode-072528/id_rsa Username:docker}
	I1026 01:12:22.236578 1987275 ssh_runner.go:195] Run: systemctl --version
	I1026 01:12:22.240961 1987275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:12:22.252645 1987275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:12:22.314164 1987275 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-26 01:12:22.304256346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 01:12:22.314860 1987275 kubeconfig.go:125] found "multinode-072528" server: "https://192.168.67.2:8443"
	I1026 01:12:22.314893 1987275 api_server.go:166] Checking apiserver status ...
	I1026 01:12:22.314947 1987275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 01:12:22.326491 1987275 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	I1026 01:12:22.336067 1987275 api_server.go:182] apiserver freezer: "12:freezer:/docker/dfbccd2c1570860ad19f52ad21f7f509e87735cbb89dd2c0b451183d6f1eef61/kubepods/burstable/pod88e4263886b0d514cd8c3dcce0f5e2f2/dd0f00869fffbc810cea04ad5cf4cda1a4ef81a95c8968f740e4855f1e02e9eb"
	I1026 01:12:22.336144 1987275 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dfbccd2c1570860ad19f52ad21f7f509e87735cbb89dd2c0b451183d6f1eef61/kubepods/burstable/pod88e4263886b0d514cd8c3dcce0f5e2f2/dd0f00869fffbc810cea04ad5cf4cda1a4ef81a95c8968f740e4855f1e02e9eb/freezer.state
	I1026 01:12:22.345152 1987275 api_server.go:204] freezer state: "THAWED"
	I1026 01:12:22.345185 1987275 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1026 01:12:22.352886 1987275 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1026 01:12:22.352922 1987275 status.go:463] multinode-072528 apiserver status = Running (err=<nil>)
	I1026 01:12:22.352934 1987275 status.go:176] multinode-072528 status: &{Name:multinode-072528 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:12:22.352952 1987275 status.go:174] checking status of multinode-072528-m02 ...
	I1026 01:12:22.353292 1987275 cli_runner.go:164] Run: docker container inspect multinode-072528-m02 --format={{.State.Status}}
	I1026 01:12:22.380017 1987275 status.go:371] multinode-072528-m02 host status = "Running" (err=<nil>)
	I1026 01:12:22.380047 1987275 host.go:66] Checking if "multinode-072528-m02" exists ...
	I1026 01:12:22.380390 1987275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-072528-m02
	I1026 01:12:22.401995 1987275 host.go:66] Checking if "multinode-072528-m02" exists ...
	I1026 01:12:22.402302 1987275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 01:12:22.402358 1987275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-072528-m02
	I1026 01:12:22.420263 1987275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35154 SSHKeyPath:/home/jenkins/minikube-integration/19868-1857747/.minikube/machines/multinode-072528-m02/id_rsa Username:docker}
	I1026 01:12:22.508720 1987275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 01:12:22.519978 1987275 status.go:176] multinode-072528-m02 status: &{Name:multinode-072528-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:12:22.520016 1987275 status.go:174] checking status of multinode-072528-m03 ...
	I1026 01:12:22.520338 1987275 cli_runner.go:164] Run: docker container inspect multinode-072528-m03 --format={{.State.Status}}
	I1026 01:12:22.543417 1987275 status.go:371] multinode-072528-m03 host status = "Stopped" (err=<nil>)
	I1026 01:12:22.543440 1987275 status.go:384] host is not running, skipping remaining checks
	I1026 01:12:22.543448 1987275 status.go:176] multinode-072528-m03 status: &{Name:multinode-072528-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
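
As the two status dumps above show, stopping a single worker leaves the control plane and the remaining worker running, and `minikube status` reports the partially stopped cluster with exit code 7. A minimal sketch with this run's names:

    minikube -p multinode-072528 node stop m03
    minikube -p multinode-072528 status                    # exit status 7: m03 reports host/kubelet Stopped
    minikube -p multinode-072528 status --alsologtostderr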

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-072528 node start m03 -v=7 --alsologtostderr: (9.170253384s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.95s)
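
The stopped worker is brought back with node start, and the follow-up status and kubectl calls confirm it rejoins the cluster. Equivalent commands, condensed from the run above:

    minikube -p multinode-072528 node start m03 -v=7 --alsologtostderr
    minikube -p multinode-072528 status -v=7 --alsologtostderr
    kubectl get nodes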

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (103.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-072528
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-072528
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-072528: (25.015707749s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072528 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-072528 --wait=true -v=8 --alsologtostderr: (1m18.627712526s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-072528
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.78s)
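
The point of this test is that stopping and restarting the whole profile preserves the node set; the two `node list` calls bracketing the stop/start are what the test compares. Condensed:

    minikube node list -p multinode-072528
    minikube stop -p multinode-072528
    minikube start -p multinode-072528 --wait=true -v=8 --alsologtostderr
    minikube node list -p multinode-072528      # expected to match the first listing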

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 node delete m03
E1026 01:14:17.061516 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-072528 node delete m03: (4.960272742s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.63s)
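
Deleting a worker removes it both from the minikube profile and from the Kubernetes node list; the go-template query in the run above then checks that every remaining node reports Ready. A short sketch:

    minikube -p multinode-072528 node delete m03
    minikube -p multinode-072528 status --alsologtostderr
    kubectl get nodes        # the deleted worker no longer appears; remaining nodes report Ready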

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-072528 stop: (23.9386287s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-072528 status: exit status 7 (99.944413ms)

                                                
                                                
-- stdout --
	multinode-072528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-072528-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-072528 status --alsologtostderr: exit status 7 (102.579991ms)

                                                
                                                
-- stdout --
	multinode-072528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-072528-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:14:46.003640 1995716 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:14:46.004236 1995716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:14:46.004255 1995716 out.go:358] Setting ErrFile to fd 2...
	I1026 01:14:46.004261 1995716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:14:46.004712 1995716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 01:14:46.005031 1995716 out.go:352] Setting JSON to false
	I1026 01:14:46.005064 1995716 mustload.go:65] Loading cluster: multinode-072528
	I1026 01:14:46.007032 1995716 notify.go:220] Checking for updates...
	I1026 01:14:46.007052 1995716 config.go:182] Loaded profile config "multinode-072528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1026 01:14:46.007201 1995716 status.go:174] checking status of multinode-072528 ...
	I1026 01:14:46.009436 1995716 cli_runner.go:164] Run: docker container inspect multinode-072528 --format={{.State.Status}}
	I1026 01:14:46.028317 1995716 status.go:371] multinode-072528 host status = "Stopped" (err=<nil>)
	I1026 01:14:46.028349 1995716 status.go:384] host is not running, skipping remaining checks
	I1026 01:14:46.028357 1995716 status.go:176] multinode-072528 status: &{Name:multinode-072528 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 01:14:46.028397 1995716 status.go:174] checking status of multinode-072528-m02 ...
	I1026 01:14:46.028709 1995716 cli_runner.go:164] Run: docker container inspect multinode-072528-m02 --format={{.State.Status}}
	I1026 01:14:46.053542 1995716 status.go:371] multinode-072528-m02 host status = "Stopped" (err=<nil>)
	I1026 01:14:46.053569 1995716 status.go:384] host is not running, skipping remaining checks
	I1026 01:14:46.053577 1995716 status.go:176] multinode-072528-m02 status: &{Name:multinode-072528-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.14s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (56.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072528 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1026 01:15:40.128467 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-072528 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (55.972139606s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072528 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.73s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-072528
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072528-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-072528-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.494298ms)

                                                
                                                
-- stdout --
	* [multinode-072528-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-072528-m02' is duplicated with machine name 'multinode-072528-m02' in profile 'multinode-072528'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072528-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-072528-m03 --driver=docker  --container-runtime=containerd: (33.977154195s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-072528
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-072528: exit status 80 (358.561279ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-072528 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-072528-m03 already exists in multinode-072528-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-072528-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-072528-m03: (2.03743966s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.52s)
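
Two naming rules are checked here: a new profile may not reuse a machine name that already belongs to an existing multi-node profile (exit 14), and `node add` refuses to create a node whose generated name collides with an existing standalone profile (exit 80). Condensed, with this run's names:

    minikube start -p multinode-072528-m02 --driver=docker --container-runtime=containerd   # exit 14: name already used by a machine in multinode-072528
    minikube start -p multinode-072528-m03 --driver=docker --container-runtime=containerd   # allowed: creates a standalone profile
    minikube node add -p multinode-072528                                                   # exit 80: the next node name (m03) collides with that profile
    minikube delete -p multinode-072528-m03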

                                                
                                    
x
+
TestPreload (113.47s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-853895 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1026 01:17:09.673489 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-853895 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m14.447959913s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-853895 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-853895 image pull gcr.io/k8s-minikube/busybox: (1.974315841s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-853895
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-853895: (12.071323261s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-853895 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-853895 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.059090906s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-853895 image list
helpers_test.go:175: Cleaning up "test-preload-853895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-853895
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-853895: (2.595316817s)
--- PASS: TestPreload (113.47s)
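
The preload check starts an older Kubernetes version with the preloaded image tarball disabled, pulls an extra image, stops, restarts with defaults, and finally uses `image list` to confirm the pulled image survived the restart. A sketch with this run's profile name:

    minikube start -p test-preload-853895 --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    minikube -p test-preload-853895 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-853895
    minikube start -p test-preload-853895 --memory=2200 --driver=docker --container-runtime=containerd
    minikube -p test-preload-853895 image list     # busybox should still be listed
    minikube delete -p test-preload-853895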

                                                
                                    
x
+
TestScheduledStopUnix (105.95s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-145497 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-145497 --memory=2048 --driver=docker  --container-runtime=containerd: (29.675934538s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-145497 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-145497 -n scheduled-stop-145497
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-145497 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1026 01:18:46.951309 1864373 retry.go:31] will retry after 74.917µs: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.952838 1864373 retry.go:31] will retry after 138.032µs: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.954264 1864373 retry.go:31] will retry after 244.343µs: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.958655 1864373 retry.go:31] will retry after 325.97µs: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.959804 1864373 retry.go:31] will retry after 574.216µs: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.960911 1864373 retry.go:31] will retry after 549.858µs: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.963665 1864373 retry.go:31] will retry after 736.5µs: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.967421 1864373 retry.go:31] will retry after 1.636401ms: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.969277 1864373 retry.go:31] will retry after 2.863366ms: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.972447 1864373 retry.go:31] will retry after 2.556182ms: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.977031 1864373 retry.go:31] will retry after 5.173539ms: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.983409 1864373 retry.go:31] will retry after 9.597263ms: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:46.993963 1864373 retry.go:31] will retry after 14.559877ms: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:47.009264 1864373 retry.go:31] will retry after 27.352997ms: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
I1026 01:18:47.037497 1864373 retry.go:31] will retry after 39.201462ms: open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/scheduled-stop-145497/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-145497 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-145497 -n scheduled-stop-145497
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-145497
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-145497 --schedule 15s
E1026 01:19:17.061295 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-145497
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-145497: exit status 7 (77.040715ms)

                                                
                                                
-- stdout --
	scheduled-stop-145497
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-145497 -n scheduled-stop-145497
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-145497 -n scheduled-stop-145497: exit status 7 (81.091318ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-145497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-145497
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-145497: (4.704971562s)
--- PASS: TestScheduledStopUnix (105.95s)
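
Scheduled stop is driven entirely by the --schedule and --cancel-scheduled flags: a pending schedule can be replaced, cancelled, or left to fire, after which `status` reports the host as Stopped with exit code 7. Condensed from the run above:

    minikube start -p scheduled-stop-145497 --memory=2048 --driver=docker --container-runtime=containerd
    minikube stop -p scheduled-stop-145497 --schedule 5m        # queue a stop
    minikube stop -p scheduled-stop-145497 --cancel-scheduled   # cancel it
    minikube stop -p scheduled-stop-145497 --schedule 15s       # queue again and let it fire
    minikube status -p scheduled-stop-145497                    # exit 7 once the stop has run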

                                                
                                    
x
+
TestInsufficientStorage (10.48s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-638085 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-638085 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.960308522s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"83703297-c02d-4186-9edf-77ee2961b869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-638085] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"484e4d12-e49a-4beb-b61d-4ba08bbd4e6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19868"}}
	{"specversion":"1.0","id":"afe3568f-22c2-40d6-8190-3b49169d07d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3634ee03-6cc4-49b6-94ab-fd883cf715cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig"}}
	{"specversion":"1.0","id":"db2aee93-d767-4cf2-a78b-cd1315e66f3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube"}}
	{"specversion":"1.0","id":"700cccf8-aef7-4646-85cb-e70e7e7ea384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f58246f3-174f-4a86-8bcd-0873457e03ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"49323b80-c334-483b-a667-563269ab0e30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a885f83b-4987-463a-ab19-c7976a835bb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3c299926-30ea-4824-aff2-0ef7d7dce4c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b19983b9-8519-4995-90c1-9a79768dd231","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ce7d6d7a-9e73-478d-a718-61e1c3deb876","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-638085\" primary control-plane node in \"insufficient-storage-638085\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"30e2655d-fff4-4b29-82ca-a2eac0854ccd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1729876044-19868 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f13f55ee-ed3e-4c78-b80a-6fb2c428682e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9607b35e-0fcb-406a-accb-c562691055ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-638085 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-638085 --output=json --layout=cluster: exit status 7 (296.584785ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-638085","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-638085","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:20:10.970453 2014349 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-638085" does not appear in /home/jenkins/minikube-integration/19868-1857747/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-638085 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-638085 --output=json --layout=cluster: exit status 7 (307.720506ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-638085","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-638085","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 01:20:11.284144 2014413 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-638085" does not appear in /home/jenkins/minikube-integration/19868-1857747/kubeconfig
	E1026 01:20:11.294350 2014413 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/insufficient-storage-638085/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-638085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-638085
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-638085: (1.910796849s)
--- PASS: TestInsufficientStorage (10.48s)
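
The storage check is exercised by overriding capacity through the test-only MINIKUBE_TEST_* environment variables echoed in the JSON events above; `start` then aborts with exit code 26 (RSRC_DOCKER_STORAGE), and the emitted advice notes that --force skips the check. A sketch, with the values mirroring this run:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p insufficient-storage-638085 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd   # exit 26
    minikube status -p insufficient-storage-638085 --output=json --layout=cluster   # StatusName: InsufficientStorage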

                                                
                                    
x
+
TestRunningBinaryUpgrade (74.02s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.29203627 start -p running-upgrade-755408 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1026 01:25:12.737281 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.29203627 start -p running-upgrade-755408 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (35.17637012s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-755408 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-755408 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.302346961s)
helpers_test.go:175: Cleaning up "running-upgrade-755408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-755408
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-755408: (2.749508111s)
--- PASS: TestRunningBinaryUpgrade (74.02s)
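
The running-binary upgrade path starts a cluster with an older released minikube and then re-runs `start` on the same profile with the binary under test while the cluster is still running. Sketch (the /tmp path is the downloaded v1.26.0 release binary used by this run, and `minikube` again stands for the freshly built binary):

    /tmp/minikube-v1.26.0.29203627 start -p running-upgrade-755408 --memory=2200 --vm-driver=docker --container-runtime=containerd
    minikube start -p running-upgrade-755408 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
    minikube delete -p running-upgrade-755408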

                                                
                                    
x
+
TestKubernetesUpgrade (345.67s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.002084035s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-083180
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-083180: (1.492566904s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-083180 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-083180 status --format={{.Host}}: exit status 7 (103.330186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m36.567529335s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-083180 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (144.974996ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-083180] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-083180
	    minikube start -p kubernetes-upgrade-083180 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0831802 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-083180 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.654409224s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-083180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-083180
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-083180: (2.565673966s)
--- PASS: TestKubernetesUpgrade (345.67s)
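
Upgrading a stopped cluster from v1.20.0 to v1.31.2 is allowed, but the later attempt to start the same profile back at v1.20.0 is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED); the suggested ways out are to delete and recreate, start a second profile at the old version, or keep the current version. Condensed:

    minikube start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    minikube stop -p kubernetes-upgrade-083180
    minikube start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.31.2 --driver=docker --container-runtime=containerd
    minikube start -p kubernetes-upgrade-083180 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # exit 106: downgrade refused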

                                                
                                    
x
+
TestMissingContainerUpgrade (181.01s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1441828633 start -p missing-upgrade-968406 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1441828633 start -p missing-upgrade-968406 --memory=2200 --driver=docker  --container-runtime=containerd: (1m37.179949218s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-968406
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-968406: (10.283449943s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-968406
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-968406 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1026 01:22:09.672596 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-968406 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m9.657244691s)
helpers_test.go:175: Cleaning up "missing-upgrade-968406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-968406
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-968406: (2.629916286s)
--- PASS: TestMissingContainerUpgrade (181.01s)
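
Here the cluster's Docker container is stopped and removed out from under a cluster created by an older minikube, and the new binary's `start` is expected to recreate it. Sketch (again, the /tmp path is the older release binary downloaded for this run):

    /tmp/minikube-v1.26.0.1441828633 start -p missing-upgrade-968406 --memory=2200 --driver=docker --container-runtime=containerd
    docker stop missing-upgrade-968406
    docker rm missing-upgrade-968406
    minikube start -p missing-upgrade-968406 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd   # recreates the missing container
    minikube delete -p missing-upgrade-968406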

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-407829 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-407829 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (81.099489ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-407829] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
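
--no-kubernetes and --kubernetes-version are mutually exclusive, so passing both fails immediately with exit code 14 before any container is created. The later subtests then exercise the supported combinations, including verifying that the kubelet service is not active on a no-Kubernetes node:

    minikube start -p NoKubernetes-407829 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd   # exit 14: flags conflict
    minikube start -p NoKubernetes-407829 --no-kubernetes --driver=docker --container-runtime=containerd                             # valid: node without Kubernetes
    minikube ssh -p NoKubernetes-407829 "sudo systemctl is-active --quiet service kubelet"                                           # nonzero exit: kubelet is not running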

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-407829 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-407829 --driver=docker  --container-runtime=containerd: (40.824626842s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-407829 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-407829 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-407829 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.195833995s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-407829 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-407829 status -o json: exit status 2 (315.909146ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-407829","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-407829
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-407829: (2.013018104s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-407829 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-407829 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.968637757s)
--- PASS: TestNoKubernetes/serial/Start (7.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-407829 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-407829 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.158501ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
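Editor's note: this verification relies only on the exit status of systemctl inside the node. The ssh layer reports the remote status 3 and minikube itself exits 1, and any non-zero result is treated as "kubelet not running". A small sketch of the same probe, assuming minikube is on PATH and using the profile name from the log:

package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    // Ask systemd inside the minikube node whether the kubelet unit is active.
    // --quiet suppresses output; the exit code alone carries the answer.
    cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-407829",
        "sudo systemctl is-active --quiet service kubelet")
    err := cmd.Run()

    var exitErr *exec.ExitError
    switch {
    case err == nil:
        fmt.Println("kubelet is active (unexpected for a --no-kubernetes profile)")
    case errors.As(err, &exitErr):
        fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
    default:
        fmt.Println("could not run the check:", err)
    }
}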

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-407829
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-407829: (1.225886686s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-407829 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-407829 --driver=docker  --container-runtime=containerd: (7.252284296s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.25s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-407829 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-407829 "sudo systemctl is-active --quiet service kubelet": exit status 1 (372.991735ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (91.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.139169226 start -p stopped-upgrade-497704 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.139169226 start -p stopped-upgrade-497704 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (33.053578875s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.139169226 -p stopped-upgrade-497704 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.139169226 -p stopped-upgrade-497704 stop: (20.037283372s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-497704 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1026 01:24:17.061805 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-497704 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.426972423s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (91.52s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-497704
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-497704: (1.138037943s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
TestPause/serial/Start (50.77s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-275326 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-275326 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (50.768587879s)
--- PASS: TestPause/serial/Start (50.77s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.53s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-275326 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-275326 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.505786026s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.53s)

                                                
                                    
TestPause/serial/Pause (1.06s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-275326 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-275326 --alsologtostderr -v=5: (1.061459041s)
--- PASS: TestPause/serial/Pause (1.06s)

                                                
                                    
TestPause/serial/VerifyStatus (0.47s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-275326 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-275326 --output=json --layout=cluster: exit status 2 (465.11587ms)

                                                
                                                
-- stdout --
	{"Name":"pause-275326","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-275326","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.47s)
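Editor's note: the exit status 2 is expected while the profile is paused; the interesting part is the --layout=cluster JSON above, which reuses HTTP-style codes (200 OK, 405 Stopped, 418 Paused) for the profile, each node, and each component. A minimal decoding sketch covering only the fields visible above, assuming minikube is on PATH:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// clusterStatus models just the parts of the --layout=cluster JSON used here.
type clusterStatus struct {
    Name       string
    StatusCode int
    StatusName string
    Nodes      []struct {
        Name       string
        StatusName string
        Components map[string]struct {
            StatusCode int
            StatusName string
        }
    }
}

func main() {
    cmd := exec.Command("minikube", "status", "-p", "pause-275326",
        "--output=json", "--layout=cluster")
    out, _ := cmd.Output() // non-zero exit is expected while paused; the JSON is still printed

    var cs clusterStatus
    if err := json.Unmarshal(out, &cs); err != nil {
        fmt.Println("could not parse cluster status:", err)
        return
    }
    fmt.Printf("%s: %s (%d)\n", cs.Name, cs.StatusName, cs.StatusCode)
    for _, n := range cs.Nodes {
        for name, c := range n.Components {
            fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
        }
    }
}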

                                                
                                    
TestPause/serial/Unpause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-275326 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.90s)

                                                
                                    
TestPause/serial/PauseAgain (1.15s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-275326 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-275326 --alsologtostderr -v=5: (1.146228288s)
--- PASS: TestPause/serial/PauseAgain (1.15s)

                                                
                                    
TestPause/serial/DeletePaused (3.14s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-275326 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-275326 --alsologtostderr -v=5: (3.139664169s)
--- PASS: TestPause/serial/DeletePaused (3.14s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.51s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-275326
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-275326: exit status 1 (20.081065ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-275326: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)
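Editor's note: deletion is verified indirectly. The profile's Docker volume must be gone (docker volume inspect fails with the empty [] result above), and docker ps -a and docker network ls must no longer show profile resources. A small sketch of the same post-delete check, assuming the Docker CLI is available and reusing the profile name from the log:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    profile := "pause-275326" // profile name from the log above

    // After minikube delete, the profile's Docker volume should no longer exist,
    // so docker volume inspect is expected to fail.
    if err := exec.Command("docker", "volume", "inspect", profile).Run(); err != nil {
        fmt.Println("volume is gone, as expected:", err)
    } else {
        fmt.Println("volume still exists; delete did not clean it up")
    }

    // The network list can be checked the same way.
    out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
    if err != nil {
        fmt.Println("could not list networks:", err)
        return
    }
    fmt.Printf("remaining networks:\n%s", out)
}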

                                                
                                    
TestNetworkPlugins/group/false (4.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-762620 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-762620 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (226.480223ms)

                                                
                                                
-- stdout --
	* [false-762620] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 01:27:22.989548 2055063 out.go:345] Setting OutFile to fd 1 ...
	I1026 01:27:22.989675 2055063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:27:22.989680 2055063 out.go:358] Setting ErrFile to fd 2...
	I1026 01:27:22.989684 2055063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1026 01:27:22.989935 2055063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19868-1857747/.minikube/bin
	I1026 01:27:22.990345 2055063 out.go:352] Setting JSON to false
	I1026 01:27:22.991356 2055063 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":32993,"bootTime":1729873050,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1026 01:27:22.991435 2055063 start.go:139] virtualization:  
	I1026 01:27:22.995141 2055063 out.go:177] * [false-762620] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1026 01:27:22.996948 2055063 out.go:177]   - MINIKUBE_LOCATION=19868
	I1026 01:27:22.997089 2055063 notify.go:220] Checking for updates...
	I1026 01:27:23.002939 2055063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 01:27:23.006528 2055063 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19868-1857747/kubeconfig
	I1026 01:27:23.009210 2055063 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19868-1857747/.minikube
	I1026 01:27:23.012753 2055063 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1026 01:27:23.014661 2055063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 01:27:23.017287 2055063 config.go:182] Loaded profile config "force-systemd-flag-412022": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
	I1026 01:27:23.017405 2055063 driver.go:394] Setting default libvirt URI to qemu:///system
	I1026 01:27:23.046329 2055063 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1026 01:27:23.046459 2055063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 01:27:23.137583 2055063 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-26 01:27:23.126239586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1026 01:27:23.137694 2055063 docker.go:318] overlay module found
	I1026 01:27:23.141368 2055063 out.go:177] * Using the docker driver based on user configuration
	I1026 01:27:23.143433 2055063 start.go:297] selected driver: docker
	I1026 01:27:23.143455 2055063 start.go:901] validating driver "docker" against <nil>
	I1026 01:27:23.143475 2055063 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 01:27:23.146098 2055063 out.go:201] 
	W1026 01:27:23.148268 2055063 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1026 01:27:23.150130 2055063 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-762620 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-762620" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-762620

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-762620"

                                                
                                                
----------------------- debugLogs end: false-762620 [took: 4.332685607s] --------------------------------
helpers_test.go:175: Cleaning up "false-762620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-762620
--- PASS: TestNetworkPlugins/group/false (4.74s)
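Editor's note: this group "passes" by failing fast. The start with --cni=false is rejected for the containerd runtime (see "The \"containerd\" container runtime requires CNI" in the stderr above), minikube exits with status 14 alongside the MK_USAGE reason, and the debugLogs that follow are all no-ops because the profile never came up. A minimal sketch of asserting that refusal, assuming minikube is on PATH; the profile name cni-check is just a placeholder:

package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    // Deliberately invalid combination, mirroring the test above:
    // containerd as the runtime but CNI explicitly disabled.
    cmd := exec.Command("minikube", "start", "-p", "cni-check", // hypothetical profile name
        "--cni=false", "--driver=docker", "--container-runtime=containerd")
    err := cmd.Run()

    var exitErr *exec.ExitError
    if errors.As(err, &exitErr) {
        fmt.Printf("start refused as expected, exit code %d\n", exitErr.ExitCode())
        return
    }
    if err == nil {
        fmt.Println("start unexpectedly succeeded; clean up with: minikube delete -p cni-check")
        return
    }
    fmt.Println("could not run minikube:", err)
}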

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (164.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-368787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1026 01:29:17.061569 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-368787 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m44.566794217s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-368787 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6e0c0e6a-43e3-493d-9bca-49de21f10c66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6e0c0e6a-43e3-493d-9bca-49de21f10c66] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003891188s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-368787 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.87s)
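Editor's note: the deploy step creates the pod from testdata/busybox.yaml, polls for pods labelled integration-test=busybox until they are Ready (with an 8m0s budget), and then runs ulimit -n inside the container. A rough equivalent using kubectl directly, assuming the old-k8s-version-368787 context from this log is still in the kubeconfig:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    ctx := "old-k8s-version-368787" // kubeconfig context from the log above

    // Wait for the busybox pod created from testdata/busybox.yaml to become Ready,
    // roughly what the test's 8m0s polling loop does via its label selector.
    wait := exec.Command("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
        "pod", "-l", "integration-test=busybox", "--timeout=8m")
    if err := wait.Run(); err != nil {
        fmt.Println("pod never became Ready:", err)
        return
    }

    // Same follow-up as the test: read the open-file limit inside the container.
    out, err := exec.Command("kubectl", "--context", ctx, "exec", "busybox",
        "--", "/bin/sh", "-c", "ulimit -n").Output()
    if err != nil {
        fmt.Println("exec failed:", err)
        return
    }
    fmt.Printf("ulimit -n inside busybox: %s", out)
}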

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-314480 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-314480 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (52.577016488s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-368787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-368787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.392143311s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-368787 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-368787 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-368787 --alsologtostderr -v=3: (12.545067669s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-368787 -n old-k8s-version-368787
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-368787 -n old-k8s-version-368787: exit status 7 (140.403736ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-368787 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)
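Editor's note: after the stop, status --format={{.Host}} prints Stopped and exits with status 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon on the stopped profile. A small sketch of reading just the host field the same way, assuming minikube is on PATH; the non-zero exit is expected here, so the error is reported rather than treated as fatal:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // --format takes a Go template; {{.Host}} selects only the host state.
    cmd := exec.Command("minikube", "status",
        "--format={{.Host}}", "-p", "old-k8s-version-368787")
    out, err := cmd.Output() // a stopped profile makes status exit non-zero; the text is still printed

    host := strings.TrimSpace(string(out))
    fmt.Printf("host state: %q (status error: %v, may be ok)\n", host, err)

    if host == "Stopped" {
        fmt.Println("profile is stopped; addons such as dashboard can still be enabled before the next start")
    }
}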

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-314480 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f74cbb39-0290-46ce-a935-724b31dcdcf5] Pending
helpers_test.go:344: "busybox" [f74cbb39-0290-46ce-a935-724b31dcdcf5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f74cbb39-0290-46ce-a935-724b31dcdcf5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.0095708s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-314480 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-314480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-314480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.080616469s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-314480 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-314480 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-314480 --alsologtostderr -v=3: (12.153548456s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-314480 -n default-k8s-diff-port-314480
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-314480 -n default-k8s-diff-port-314480: exit status 7 (82.044174ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-314480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-314480 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1026 01:34:17.061556 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:37:09.672462 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-314480 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m26.731420775s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-314480 -n default-k8s-diff-port-314480
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2jdmp" [d7a46f7c-96cf-44c3-a761-c80b866cc78a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003698028s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2jdmp" [d7a46f7c-96cf-44c3-a761-c80b866cc78a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004698268s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-314480 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-314480 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
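Editor's note: the image check lists everything loaded in the node (the test uses image list --format=json) and reports images outside the expected set for the requested Kubernetes version, which is why the busybox and kindnetd images show up above. A rough sketch of a similar scan, with two assumptions called out in the comments: it uses the default line-per-image output instead of JSON, and it approximates "non-minikube" with a registry prefix check rather than the test's real comparison:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Assumption: the default output of minikube image list is one image reference per line.
    out, err := exec.Command("minikube", "-p", "default-k8s-diff-port-314480", "image", "list").Output()
    if err != nil {
        fmt.Println("could not list images:", err)
        return
    }

    for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        img := strings.TrimSpace(line)
        if img == "" {
            continue
        }
        // Crude approximation: treat anything outside registry.k8s.io as "extra".
        // The real test compares against the expected image list for the Kubernetes version.
        if !strings.HasPrefix(img, "registry.k8s.io/") {
            fmt.Println("extra image:", img)
        }
    }
}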

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-314480 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-314480 -n default-k8s-diff-port-314480
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-314480 -n default-k8s-diff-port-314480: exit status 2 (331.689969ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-314480 -n default-k8s-diff-port-314480
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-314480 -n default-k8s-diff-port-314480: exit status 2 (325.174175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-314480 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-314480 -n default-k8s-diff-port-314480
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-314480 -n default-k8s-diff-port-314480
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (82.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-892584 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-892584 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m22.836095869s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zbljx" [b57899a6-decc-4f2a-be19-f9d98f56136d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003839166s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zbljx" [b57899a6-decc-4f2a-be19-f9d98f56136d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004022876s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-368787 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-368787 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-368787 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-368787 -n old-k8s-version-368787
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-368787 -n old-k8s-version-368787: exit status 2 (339.517674ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-368787 -n old-k8s-version-368787
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-368787 -n old-k8s-version-368787: exit status 2 (340.642743ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-368787 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-368787 -n old-k8s-version-368787
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-368787 -n old-k8s-version-368787
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (60.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-696625 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-696625 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (1m0.553611646s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.55s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-892584 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee0daf84-2c1f-4818-9150-e08d85c7c486] Pending
helpers_test.go:344: "busybox" [ee0daf84-2c1f-4818-9150-e08d85c7c486] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ee0daf84-2c1f-4818-9150-e08d85c7c486] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005447928s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-892584 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-892584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-892584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.286543935s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-892584 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-892584 --alsologtostderr -v=3
E1026 01:39:17.061977 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-892584 --alsologtostderr -v=3: (12.513095409s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-892584 -n embed-certs-892584
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-892584 -n embed-certs-892584: exit status 7 (101.457711ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-892584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (267.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-892584 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-892584 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m26.934011489s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-892584 -n embed-certs-892584
E1026 01:43:53.843367 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-696625 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4e242dbd-264c-4cb7-9c8c-ef0041ed1028] Pending
helpers_test.go:344: "busybox" [4e242dbd-264c-4cb7-9c8c-ef0041ed1028] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4e242dbd-264c-4cb7-9c8c-ef0041ed1028] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004373636s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-696625 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-696625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-696625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.197815411s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-696625 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-696625 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-696625 --alsologtostderr -v=3: (12.258358246s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-696625 -n no-preload-696625
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-696625 -n no-preload-696625: exit status 7 (82.861409ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-696625 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (269.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-696625 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1026 01:41:38.366722 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:38.373526 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:38.385015 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:38.406520 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:38.447942 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:38.529223 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:38.690664 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:39.011948 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:39.653967 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:40.935506 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:43.497613 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:48.619389 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:52.738856 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:41:58.861648 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:09.672744 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:19.343699 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:31.901728 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:31.908182 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:31.919686 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:31.941124 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:31.982674 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:32.064110 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:32.225635 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:32.547373 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:33.188971 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:34.470684 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:37.033068 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:42.156999 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:42:52.399037 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:43:00.307373 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:43:12.880426 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-696625 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (4m28.98725516s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-696625 -n no-preload-696625
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zqcsj" [b1bb6389-6bef-4288-8084-eaea0ceaf69b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.025738491s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zqcsj" [b1bb6389-6bef-4288-8084-eaea0ceaf69b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003803408s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-892584 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-892584 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-892584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-892584 -n embed-certs-892584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-892584 -n embed-certs-892584: exit status 2 (326.344242ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-892584 -n embed-certs-892584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-892584 -n embed-certs-892584: exit status 2 (326.756957ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-892584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-892584 -n embed-certs-892584
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-892584 -n embed-certs-892584
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (39.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-727083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1026 01:44:17.062116 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:44:22.232766 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-727083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (39.269715115s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6x9x9" [c34dfacc-9b71-49f7-9271-afd3b337ad83] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004826499s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6x9x9" [c34dfacc-9b71-49f7-9271-afd3b337ad83] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004873402s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-696625 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-696625 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-696625 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-696625 --alsologtostderr -v=1: (1.110206915s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-696625 -n no-preload-696625
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-696625 -n no-preload-696625: exit status 2 (398.418116ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-696625 -n no-preload-696625
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-696625 -n no-preload-696625: exit status 2 (371.528729ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-696625 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-696625 --alsologtostderr -v=1: (1.079444534s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-696625 -n no-preload-696625
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-696625 -n no-preload-696625
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.29s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m26.709468356s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-727083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-727083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.662630483s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-727083 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-727083 --alsologtostderr -v=3: (3.113047551s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-727083 -n newest-cni-727083
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-727083 -n newest-cni-727083: exit status 7 (111.764814ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-727083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (24.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-727083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2
E1026 01:45:15.768880 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-727083 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.2: (23.802970288s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-727083 -n newest-cni-727083
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-727083 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-727083 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-727083 --alsologtostderr -v=1: (1.053716682s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-727083 -n newest-cni-727083
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-727083 -n newest-cni-727083: exit status 2 (437.325016ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-727083 -n newest-cni-727083
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-727083 -n newest-cni-727083: exit status 2 (469.490024ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-727083 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-727083 -n newest-cni-727083
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-727083 -n newest-cni-727083
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.09s)
E1026 01:50:59.748110 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:16.722514 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:16.728875 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:16.740148 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:16.761500 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:16.802872 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:16.884236 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:17.045703 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:17.367490 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:18.023592 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:19.305069 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:21.866889 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:26.988457 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:37.230816 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:38.366750 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:52.435526 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:52.441963 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:52.453477 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:52.474937 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:52.516356 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:52.598476 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:52.759832 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:53.081252 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (83.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m23.898256088s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.90s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-762620 "pgrep -a kubelet"
I1026 01:46:16.418880 1864373 config.go:182] Loaded profile config "auto-762620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-762620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rngmx" [1ec84678-8ddd-419d-b74f-d3b78ad095a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rngmx" [1ec84678-8ddd-419d-b74f-d3b78ad095a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003803168s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-762620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.170221238s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hxlvz" [91d2e0f5-754e-4b25-b25d-ea3330e91a81] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004344613s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-762620 "pgrep -a kubelet"
I1026 01:46:58.791861 1864373 config.go:182] Loaded profile config "kindnet-762620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-762620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dljxt" [40be26e9-98a0-4900-a2ee-cd51fb0e5fd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dljxt" [40be26e9-98a0-4900-a2ee-cd51fb0e5fd2] Running
E1026 01:47:06.074770 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/old-k8s-version-368787/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004626687s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-762620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1026 01:47:09.672459 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/addons-701091/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.29s)
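
The three short tests above probe the data path from inside the netcat pod: DNS resolution of the API service, a TCP connection to localhost, and a hairpin connection in which the pod reaches itself back through the in-cluster name "netcat". They can be replayed with the same exec commands:

kubectl --context kindnet-762620 exec deployment/netcat -- nslookup kubernetes.default
# localhost reachability on the pod's own port
kubectl --context kindnet-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# hairpin: connect via the "netcat" name rather than localhost
kubectl --context kindnet-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"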

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (52.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (52.423794698s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.42s)
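
Unlike the built-in calico/kindnet/flannel values, this run hands --cni a manifest path (testdata/kube-flannel.yaml), which minikube applies as a custom CNI; the flag accepts either a known plugin name or a path to a CNI manifest. Reduced to its essentials, the invocation is:

out/minikube-linux-arm64 start -p custom-flannel-762620 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd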

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-pxzr2" [49f5e396-8a97-455d-b58c-5e0ac8cb1056] Running
E1026 01:47:59.611137 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/default-k8s-diff-port-314480/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00485079s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-762620 "pgrep -a kubelet"
I1026 01:48:02.934164 1864373 config.go:182] Loaded profile config "calico-762620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-762620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mh52w" [df4044a0-b8e7-4085-a9fc-0bfa7f67d740] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mh52w" [df4044a0-b8e7-4085-a9fc-0bfa7f67d740] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005843441s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-762620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-762620 "pgrep -a kubelet"
I1026 01:48:27.710160 1864373 config.go:182] Loaded profile config "custom-flannel-762620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-762620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8pvxm" [728c2d7c-1ceb-4b30-88e9-394f1d5c7025] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8pvxm" [728c2d7c-1ceb-4b30-88e9-394f1d5c7025] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00542394s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-762620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (85.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1026 01:49:00.147606 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m25.788284201s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.79s)
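
--enable-default-cni=true is the legacy way of asking minikube to install its built-in bridge CNI; on current releases it appears to be treated as a deprecated alias for --cni=bridge, so the two invocations below should behave the same (the second spelling is an assumed equivalent, not taken from this run):

out/minikube-linux-arm64 start -p enable-default-cni-762620 --memory=3072 --enable-default-cni=true --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 start -p enable-default-cni-762620 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd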

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1026 01:49:17.061118 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/functional-469870/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:37.809517 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:37.815985 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:37.827358 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:37.849242 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:37.890584 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:37.971867 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:38.133472 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:38.455387 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:39.097105 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:40.379415 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:42.941170 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:49:48.063464 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.521771981s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.52s)
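
Note that flannel's controller pods land in the kube-flannel namespace with the app=flannel label (see the ControllerPod step below), unlike the calico and kindnet agents which run in kube-system; the quick manual check is therefore:

kubectl --context flannel-762620 -n kube-flannel get pods -l app=flannel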

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7xjv8" [459ecca3-6855-4c98-86fb-50ce3830d2a1] Running
E1026 01:49:58.304918 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/no-preload-696625/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004150263s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-762620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-762620 "pgrep -a kubelet"
I1026 01:50:04.550949 1864373 config.go:182] Loaded profile config "flannel-762620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-762620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dv26g" [d153c6f1-a87d-41d7-8485-afc7668ecf2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
I1026 01:50:04.934753 1864373 config.go:182] Loaded profile config "enable-default-cni-762620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
helpers_test.go:344: "netcat-6fc964789b-dv26g" [d153c6f1-a87d-41d7-8485-afc7668ecf2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00403332s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-762620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4rpxc" [8753b444-1a6e-40bc-b361-f831e64b3013] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4rpxc" [8753b444-1a6e-40bc-b361-f831e64b3013] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003497021s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-762620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-762620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (72.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-762620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m12.418334043s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.42s)
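
If it is ever unclear which CNI a profile actually ended up with, the generated CNI configuration can be inspected directly on the node (a generic check against the standard CNI config directory, not something this test performs):

out/minikube-linux-arm64 ssh -p bridge-762620 "ls /etc/cni/net.d && sudo cat /etc/cni/net.d/*"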

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-762620 "pgrep -a kubelet"
E1026 01:51:53.723102 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
I1026 01:51:53.723411 1864373 config.go:182] Loaded profile config "bridge-762620": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-762620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8vtww" [a38a4881-29fc-4597-aca5-4640133f32d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 01:51:55.009914 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-8vtww" [a38a4881-29fc-4597-aca5-4640133f32d6] Running
E1026 01:51:57.574293 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/kindnet-762620/client.crt: no such file or directory" logger="UnhandledError"
E1026 01:51:57.712845 1864373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19868-1857747/.minikube/profiles/auto-762620/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.010045007s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-762620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-762620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
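
The network-plugin profiles created above are normally torn down by the harness; done by hand, each one can be removed the same way the report's own cleanup steps do it, for example:

out/minikube-linux-arm64 delete -p bridge-762620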

                                                
                                    

Test skip (29/330)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.62s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-357605 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-357605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-357605
--- SKIP: TestDownloadOnlyKic (0.62s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-789888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-789888
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-762620 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-762620" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-762620

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-762620"

                                                
                                                
----------------------- debugLogs end: kubenet-762620 [took: 3.972884979s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-762620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-762620
--- SKIP: TestNetworkPlugins/group/kubenet (4.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-762620 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-762620" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-762620

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-762620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762620"

                                                
                                                
----------------------- debugLogs end: cilium-762620 [took: 4.942398824s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-762620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-762620
--- SKIP: TestNetworkPlugins/group/cilium (5.14s)

                                                
                                    